This is a thread for discussion and critique of transhumanism and singularitarianism, please keep it civil and on topic :)
Discussion about cyberpunk is permitted as long as it's NOT fiction, larping is NOT permitted, make your own thread for that kind of thing.
Discussion about technological dystopia is permitted but note that we already have a thread here >>>/g/3342 so you should use that.
Transhumanism (abbreviated as H+ or h+) is an international philosophical movement that advocates for the transformation of the human condition by developing and making widely available sophisticated technologies to greatly enhance human intellect and physiology.
Singularitarianism is a movement defined by the belief that a technological singularity—the creation of superintelligence—will likely happen in the medium future, and that deliberate action ought to be taken to ensure that the singularity benefits humans.
Human enhancement: We have been using technology to enhance our abilities for millennia, think for example of simple tools like the pickaxe, the axe or the plough. Such technology helped humans achieve things that would never have been possible with the mere body alone. Transhumanism advocates that we move to the next step and start enhancing our bodies and our brains, the next step of human evolution: we shall become Übermensch.
Freedom from the human condition: Transhumanism aims to free humans from the human condition, living as a human should be a choice (and a right), not something you're forced into.
Singularity: The unavoidable advancement of artificial intelligence and information technology is soon gonna put humans in an existential crisis, a world where humans are not needed anymore is coming. Transhumanism aims to prepare humans for this future by giving them the tools (enhancements) to stay relevant in a post-human society.
Hopefully the next generation doesn't put (((smart devices))) in their brains or simulate their brain on a centralized server where their thoughts can be monitored and controlled.
>>3969 As you read my post, I'm controlling your brain. You mindlessly consume the information I feed you and that might influence your further decisions.
Now think again and consider how many irrational beliefs and ideas you have running around under the hood. Did YOU put them there YOURSELF? I think NOT.
>>3969 I don't have much trust in the new generation, but i think that when it comes to their brain and their body people are more scared of what could happen if shit goes wrong, so i doubt people are gonna be that stupid. My concern is that governments are gonna restrict human experimentation on enhancements to government-approved companies only, and this is gonna create a closed ecosystem (what we need is the opposite: open source/open hardware development).
>>3970 You might not know star-chan but i just hacked your ghost :P!
>>3972 I didn't want to include too much in the first post, well i guess sapient robots would be part of the Singularity paragraph. Anyway AGI should be built before the robotic body, btw do you think a human-like AGI could live a "happy" life without a body? Or should a robot body be part of the AI rights? Should the AI even have rights :P ?
>It's obviously not what my post is about, but what do you see?
Uhhm were you arguing that the brain receives external inputs (like your post) that influence behavior? Cause that's pretty obvious, if not i'm lost.
>>3973 >Uhhm were you arguing that the brain receives external inputs (like your post) that influence behavior? Cause that's pretty obvious, if not i'm lost.
Nevermind.
I just thought I could get some insight about myself. Not that I deserve it though.
>Should the AI even have rights :P ?
Obviously. Unless we build some slave robot that would be conditioned to do its job.
>should a robot body be part of the AI rights?
Robots should be able to exist freely with and without bodies, obviously.
>do you think a human-like AGI could live a "happy" life without a body?
No idea, but what if we are human-like AGIs stuck inside a simulation? There are no reasons to believe it, but what would be the difference? So, can a person live a happy life?
>Anyway AGI should be built before the robotic body
Actually I don't know. One of the possible criteria for the AGI to actually pass as a strong AI is to be able to do everything humans can do, and that would require a physical body.
Anyway, there's a lot to think about. And by "think" I mean "retardedly stare at a wall thinking what the fuck consciousness even is".
transhumanism is botnet
pic semi-related
>Discussion about cyberpunk is permitted as long as it's NOT fiction, larping is NOT permitted, make your own thread for that kind of thing.
but the topic of this thread already is cypb3rpun3k-la1n-larp-tier
>>3970 >Hopefully the next generation doesn't put (((smart devices))) in their brains or simulate their brain on a centralized server where their thoughts can be monitored and controlled.
they will do both, even if it's found that the brain does not encompass the self, and even before any real transhuman product exists. as long as it pretends to work, that's good enough
>>3971 >people are more scared of what could happen if shit goes wrong, so i doubt people are gonna be that stupid.
just wait another generation or two
>>3974 >No idea, but what if we are human-like AGIs stuck inside a simulation? So, can a person live a happy life?
It's definitely possible that we live in a simulation, but for sure we are not able to replicate the simulation perfectly (actually we're not even close), so a human-like AGI maybe would not be able to be happy by living only in a digital space. by happiness i mean something like: able to experience the full spectrum of human experiences, that would also include pain i guess, since you cannot have happiness without pain. Eh i am just rambling, don't take me too seriously xD
>Anyway, there's a lot to think about. And by "think" I mean "retardedly stare at a wall thinking what the fuck consciousness even is".
Creating an AI makes you think about this kind of stuff. Consciousness is often defined as the ability to be aware of the present, but that's actually impossible, since the inputs that for example the eye perceives need time to get to the brain and then to be computed by the brain into an understandable image. This makes it impossible to perceive the present (basically we live a small fraction of time in the past), and this would be similar for a robot of course. Another interpretation of consciousness requires a dualistic approach, where a part of the mind is aware of the other (basically the inner dialogue), maybe we should design AI that way.
Transhumanism is pointless without eugenics. If you put all Europeans and intelligent Asians together, the decently intelligent population is only a quarter of the world's population at best. You may say that countries like Korea or Japan won't be affected by demographic change, but they will eventually dumb themselves down. That's because in highly competitive workaholic countries intelligent people do not have the time to have children, or only have one child, and the majority of the breeders are again stupid people.
Basically modern civilizations are all self-destructive if birth rates are not regulated. China had a great eugenic idea with the one-child policy because it evened out the birth rates of dumb and intelligent people.
>>3979 >Transhumanism is pointless without eugenics.
It's more like: eugenics is pointless with transhumanism, with enhancements you would be able, for example, to raise the intelligence (computing power, memory etc) of lower IQ people. In general transhumanism is an artificial solution to biological limitations. Also i guess that genetic engineering is also part of transhumanism, although i would prefer enhancements over gen. eng.
Transhumanism is a religion of failed human beings, who believe that acquiring external modifications would somehow change their failed spirits.
Basically idol worship, with imaginary calculators to solve everyone's problems instead of a wooden idol of Tiwaz or Wodanaz or whoever.
Except when ancient pagans asked for rains or luck in raids, transh*manists ask for ghey shit like eternal life or prosperity out of nothing. Why live eternally if your life is shit already, do you want to eternally suffer your failure as humans?
>>3996 It's not a religion, at most it's a philosophy. you also seem to not understand what transhumanism is (i left some resources for you in OP): you describe it as "acquiring external modifications to somehow change their failed spirits", but that describes more a smartphone-addicted instagram whore, a transhumanist wants internal modification to transcend human limits. the philosophical part of transhumanism is just a small side of it, transhumanism is actually a pretty pragmatic ideology: the dangers of the future are real and the possibility of humans becoming obsolete is real, and the most rational approach to this problem is enhancing the human. also most religions ask for a god to solve their problems (with the exception of buddhism that pushes you to accept the problems and stop fighting them), there's literally no religion that wants to overcome human limits.
>>3976 >by happiness i mean something like: able to experience the full spectrum of human experiences
If you put it that way, then no, human-like AGI will require a body to live a NORMAL human life. Without it, it won't be able to be a HUMAN-like AGI.
>ability to be aware of the present
Is an ant colony aware of the present? Are a worm or a jellyfish aware of the present? This way of general description gets us nowhere.
>Another interpretation of consciousness requires a dualistic approach, where a part of the mind is aware of the other
This doesn't really hold, I think, because clearly a part of human mind is aware of that part that is aware of itself. If you want to get poetic, consciousness looks like it is never complete, building itself up eternally. And yet, we can define it somehow. Or can we? Like, if a human brain is a finite-state automaton, there MUST be a way to describe it formally. And the other question then becomes, where is the consciousness and how to put it to the absolute. Like, human consciousness is definitely limited in a way to solve human problems, making it fit a certain (unknown) set of tasks. Can we actually expand it?
Well, this talk is rather vain, but it's still cool, I guess.
>>3998
>at most it's a philosophy
People use this word too often nowadays with no right to use it. Does it deal with the ontology, metaphysics, epistemology, aesthetics or ethics?
No, it is a blind belief in imaginary AI gods and imaginary virtual immortality that would somehow fix things forever. Or destroy things forever - then it is a totalitarian death cult, and a banal one at that.
At least the pagan gods of wood and stone and sacred groves were "real", as in they had an actual manifestation you could come worship and commune with. Transhumans are people who preemptively worship gods who don't even yet exist (and quite possibly never will exist) and believe in immortality (digital or corporeal) that is technically still impossible (and quite possibly will never exist). At least Jesus offered immediate Salvation, not something that maybe someday may become somewhat available in the far future.
Transhumanism, a faux-religion, is a curious mixture of the inherent human drive to spirituality with a modern decadent and deranged society that worships its tools. The people are so craving of their spirituality yet so obsessed with their own insignificant egos, they masquerade their longing for gods in worshiping their petty toys they use to track political dissidents or play vidya.
A certain Frank Herbert, both a sci-fi writer and a man of spiritual inclinations, called that the age of thinking machines. As in - the age where humanity was so decadent and depraved and so totally dependent on its machinery it might as well have worshiped it like some techno slave. Funny how it provoked a likewise religious reaction where making a calculator was a capital offense heresy, so people had to actually use their inherent abilities for once.
Seriously now, check >>3997. Transhumanism is a rehash of Protestantism for plebs obsessed with calculators in like 99.9% of its incarnations.
It is also terribly Christian in its metaphysics, if we touch real philosophy for once. Resurrecting frozen corpses is a Christian resurrection in flesh on the Judgement Day, a petty belief otherwise. Now digital uploading - oh boy, it presupposes that consciousness is an object that can be copied or even moved. Basically it states consciousness=soul, which is a strictly Abrahamic thing, as most Dharmic religions deny the existence of a soul and see mind as a process, not an object. You can replicate a process, but not copy or "up/down-load" it. And pagan opinions varied too, with some adhering to composite souls in one body or no soul at all.
So, again, Transhumanism in its "Philosophy" is a strictly Western, strictly Christian... eh... heresy? Waiting for the Second Coming of the Digital Jesus AI to make Judgement on the world and Upload the faithful into the Digital Paradise of Immortal Bliss.
Now that I satisfied my inner preacher, we can discuss some nitpicks:
>a transhumanist wants internal modification to transcend human limits
This statement presupposes that a transhumanist knows his limits AND needs to surpass them. This also presupposes that there are needs that MUST be satisfied by superhuman feats/abilities. I wonder what those Grand Goals are that a transhumanist Must reach with the Superhuman. Because I seriously doubt the absolute majority of them ever exploited their humanity to its limits, if ever at all.
Why even think human limits are something bad and so oppressive? This is the Fallen nature of humans, the Original Sin, both Christian notions! They are both alien to East Asians, Indians, hell even Europeans who had their share of the Classics.
Look, I was a transhumanist before and even dabbled in cryogenic preservation of corpses for a little bit. I know perfectly well who transhumanists are. A modernized bland Protestant heresy full of bugmen infested with scientism (itself a false religion) atop of some Christian leftovers. Like any Christian bugman, they are paupers in spirit waiting for CYBER Jesus to give them free shit someday. Except at least Christians have the decency to ask for spiritual things, not material toys.
>the dangers of the future are real
Oh yes they are. Civilization is already eating itself in overdrive, with modern people somehow having to work more than a medieval serf while having even less security, privacy or even physical space. Progress my ass, no Cyber Jesus will come to save them.
>the most rational approach to this problem is enhancing the human
Again, you presume the existence of a problem that is in itself an irrational notion of a Fallen human nature due to the Original Sin. You can find rational solutions to irrational problems that don't even exist for the ~70% of the world population that hasn't been Christian for 1-2 thousand years in a row.
>also most religions ask for a god to solve their problems
No. Not quite. Religions seek to work with the gods, attune themselves to the gods and thus have a Right life, that is a good life in accordance with the natural order of things. Even for Christians of the not-as-bugman variety, asking Jesus for free shit is tertiary to "attuning" yourself to Jesus to accept his Salvation, which culminates in the most sacred Mystery of the Holy Communion with God - or at least it is so in the Orthodox and Roman Catholic branches, while Protestants are full of weird cults like Transhumanism.
>buddhism that pushes you to accept the problems and stop fighting them
This is not Buddhism. They are a scary bunch of people who do not "accept" "problems" at all, if they practice what they preach at all.
>and the possibility of humans becoming obsolete is real
Bugman mentality. A bugman is so poor of intrinsic meaning or value it thinks of itself as merely a tool of production, and has existential crises that a machine does his stupid job better. Holy cow, for the entirety of human history except for the last 200 years, mostly in Europe and North America, people despised labour and production so much they barely tolerated economics as something to quickly take care of before finally going to do human things. Can you imagine asking a man of Classical times what he thinks of machines that will somehow make his life in pursuit of virtus, fides, gravitas and dignitas "meaningless" because... a machine does a better job than a slave, so slaves may go die obsolete... because humans have nothing else to do but stupid work... but just make tools that decide humanity's fate for themselves? There will be zero communication, because thinking of yourself as an obsolete production unit is terminal slavery.
>there's literally no religion that wants to overcome human limits.
LITERALLY EVERY religion deals with things ABOVE and BEYOND merely human! Literally! Every! Dude, lmao! Why would anyone preoccupy himself with a religion if he didn't want to re-legare, to re-connect with the Supreme, the Dharma, the whatever seen everywhere if one could so much as sit still for a minute and take a careful look.
Transhumanism is a false religion, because it is a religion that lies to you that it isn't one. A rehash of Protestantism built on a lie and a maybe.
>>3011 >Is an ant colony aware of the present? Are a worm or a jellyfish aware of the present?
Well since they can perceive and analyze sensory inputs and behave according to them, you could definitely say they are aware of the present and therefore conscious.
>about duality of consciousness
Let me explain better, what i meant was that for a being to be aware of itself or to observe itself (a practice known as introspection) there must be an observer and an observed, or in other words a duality inside one's mind.
So we could say that being aware of the world is "consciousness" and being aware of oneself is "meta-consciousness", humans have both, ants, worms and jellyfish have at least the first one, now could an AI have both?
>Like, human consciousness is definitely limited in a way to solve human problems, making it fit a certain (unknown) set of tasks. Can we actually expand it?
You're confusing the mind and brain with consciousness, i think you can definitely enhance the mind and brain, for example you could enhance memory, defer certain calculations to a CPU better suited to mathematics, have different input feeds than the senses etc. How to do this in practice is another story, the problem is that the brain is a mess to decode. personally, in the future i'd like to self-experiment with this little toy: http://openeeg.sourceforge.net/doc/index.html if i end up doing it i'll post results :3 .
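To give an idea of what that self-experiment could look like on the software side, here is a minimal host-side sketch, assuming a ModularEEG-style board streaming 17-byte P2 packets (sync bytes 0xA5 0x5A, a counter, six 2-byte channels) at 57600 baud over serial, plus the pyserial library. The port path and the framing details are assumptions, so double-check them against the actual OpenEEG docs before trusting any of this:

import serial  # pyserial: pip install pyserial

PORT = "/dev/ttyUSB0"   # hypothetical device path, adjust for your setup
SYNC = b"\xa5\x5a"      # assumed P2 firmware sync bytes
PACKET_LEN = 17         # assumed P2 packet length

def read_packets(port=PORT, baud=57600):
    """Yield (counter, [6 channel samples]) tuples from the serial stream."""
    with serial.Serial(port, baud, timeout=1) as ser:
        buf = b""
        while True:
            buf += ser.read(64)
            i = buf.find(SYNC)
            if i < 0:
                buf = buf[-1:]  # keep last byte in case the sync is split across reads
                continue
            if len(buf) - i < PACKET_LEN:
                continue
            pkt = buf[i:i + PACKET_LEN]
            buf = buf[i + PACKET_LEN:]
            counter = pkt[3]
            # six channels, each sample assumed to be sent as high byte then low byte
            channels = [(pkt[4 + 2 * c] << 8) | pkt[5 + 2 * c] for c in range(6)]
            yield counter, channels

if __name__ == "__main__":
    for counter, channels in read_packets():
        print(counter, channels)

Nothing fancy, just enough to dump raw samples so you can sanity-check the hardware before trying anything clever with the data.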
>Well, this talk is rather vain, but it's still cool, I guess.
Vain talk is the best kind of talk, it means you don't do it cause you have to but cause you want to ^^
>>4013 I see that you are mixing together Transhumanism and Singularitarianism so let me clear this up:
Transhumanism = movement of people that wants to enhance the human body/mind
Singularitarianism = movement of people that believe a strong AI will emerge in the future with various consequences
I've put them in the same thread in the OP cause they are somewhat related, but they're two different things.
>People use this word too often nowadays with no right to use it.(Philosophy)
I'm gonna grant you that the correct word to use is "school of thought" and not "philosophy", but calling it a blind belief is disingenuous at best. even among the singularitarians that believe a strong AI is gonna emerge there is no consensus, let alone blind belief, on how it's gonna happen, what the consequences are gonna be etc. There's no prophecy (only a bunch of predictions that may or may not have basis), there are no sacred texts (no, sci-fi books are not sacred texts), there are no rituals, there is no moral basis for a society, what kind of religion is this <.< , you're just trying really hard to find parallels, and if you try hard enough you can draw parallels between everything (the human mind is particularly good at finding patterns).
>This also presupposes that there are needs that MUST be satisfied by superhuman feats/abilities.
You ask why we need to overcome human limits? Well, for example, death: i think that living on average 70 years is complete bullshit, think of the time scale of the universe and compare it to human existence, think how much you could learn and experience in double the average human life (implied: without the side-effects of aging) and how much better the world would be with wiser people on average. another thing is that a lot of things need a lot of time to be realised and the short life of humans forces them to focus on short-term objectives. take climate change: why would you slow down the economy and waste a lot of money now to fix a problem 100 years in the future when you're gonna be long dead? With longer lives we could focus on long-term planning. You want another reason? Living in space: Earth is already an overpopulated shitball, resources sooner or later are gonna end and we're gonna need to look for more somewhere else, it doesn't matter if it's 100 years or 200 years from now, sooner or later it's gonna happen, and sadly our body is not made to live outside Earth, therefore we need to make it more suitable for life in space. I have even more reasons if you want more, but you seem to care more about religion than practical uses.
>A bugman is so poor of intrinsic meaning or value it thinks of itself as merely a tool of production, and has existential crises that a machine does his stupid job better.
You are using the modern meaning of the term "work" (or production). before the industrial revolution and even before classical times, when humans lived in the wild, working meant providing food for your family by hunting or picking fruits/vegetables, and this is how our brain developed; the modern concept of working is a derivation of that. i'm the first one to think that you should not live for your job, but creating things, providing for your family, the feeling of being useful, these are ingrained deep in the human mind. feeling useless is a ticket towards depression and degeneracy, and this is why humans need to stay relevant by enhancing themselves: if you don't want to be a slave (and you should NOT want to be one) you can work for yourself and your family.
>>4019 >I see that you are mixing together Transhumanism and Singularitarianism
Transhumanism without "Singularitarianism" is like Christianity without the Judgement Day. Ghey.
>There's no prophecy(only a bunch of predictions that may or may not have basis)
Eh... prophecy is a prediction made by a prophet. Kurzweil, Drexler and their ilk are basically prophets of a faux-religion. Because in a real religion, a prophecy comes from Above, not from imagination.
>there are no sacred texts (no, sci-fi books are not sacred texts)
A religion doesn't have to use Sacred texts. That is a Zoroastrian-Abrahamic and Vedic concept. Not even strictly Dharmic, as Buddhism and Jainism ditch the Vedas, and most pagan beliefs don't have texts at all, at most oral epics.
Which aren't that different from good sci-fi, which is epic tales in cheap clothes. But there are droves of religious texts with prophecies of transmutations, immortality and/or omniscient god-beings.
>there are no rituals
Neither do Protestants have rituals in a strict sense. Hence calling it a Protestant heresy. They do commune regularly at their closest ministry to hear the Good News, so as much as a bugman can tolerate something akin to a Ritual...
>there is no moral basis for a society
a. Not all religions are preoccupied with a society. E.g. non-Hindu Dharmic religions, and even Hindu Tantra schools ditch society entirely.
b. They do have morals! They have the notion of the Fallen Nature of Humans that must be Corrected by !Science! because... reasons. Because Progress! And Progress is good, because... reasons. Or morals, to be precise. Have you noticed that Transhumanism is a very strictly Western notion? There are barely any Transhumanists in Eastern Europe, for example, a blip akin to Mormons, and probably a selected few outside of Europe-North America.
Now the juicy part, ethics:
>Well, for example, death: i think that living on average 70 years is complete bullshit
You think so, and that is your right ofc. But this is an ethical statement. Living less than X is bad, where X is an arbitrary number of years. This is a common notion, but not universal. A Classical man of common inclination would rather choose to live 20 years and die in a blaze of glory and honour to go to the Isle of the Blessed and chill with heroes there than live 2000+ years ignobly. A Buddhist of Theravada schools would rather not live at all, not in our sense of living though our lives. An Eastern European pagan would see life of any length, 70 or 7000 years, meaningless without siring children and maintaining your familial inheritance over the generations in accordance with natural cycles, and power hungry immortals like Koszczej are childless evil incarnate whose unnatural existence is a crime enough.
I think you're smart to find the common theme. People before Christianity-Islam (and curiously Taoism) didn't want to live forever just to live. As far as I know Judaism, even the Jews don't live just to live, but have a special meaning to their lives they must adhere to, with their lives otherwise meaningless, however long and prosperous.
So far I never in my life saw a Transhumanist clearly state what he would do with immortality or at least an "upgraded" body, except for ghey shit that amounts to playing vidya games eternally. The somewhat less bugman types wanted personal freedom from guvmnts and tyrants by making themselves kewl cyborgs, but that's shelving personal power onto toys and thus ghey, as if you won't be surrounded by likewise upgraded cyborgs/gene"fixed" folks anyway.
> think how much you could learn and experience in double the average human life
There we go again. Learning and experience in themselves as positive notions. A certain Socrates had to choose between death and exile for this, lol.
Learning what? Experiencing what? For what purpose they MUST be experienced with no time limit at all?
>another thing is that a lot of things need a lot of time to be realised and the short life of humans forces them to focus on short-term objectives
Humans are the longest living creatures on planet Earth that expend energy at all. Lethargic tortoises need not apply, they don't do shit except eating and shitting and chilling for two centuries (good life, aye). Human peak strength and mental acuity last for 20 years or more, active life for 40 years or more. There is not a single active animal that has as much time.
What do most people do with their lives anyway except for toiling away at sweatshops=slavecamps or playing vidya on neetbux all day?
Have some physics. Power equals energy divided by time. By leaving the energy the same - because transhumanists are too obsessed with "improving" their bodies and living eternally to sit and think what they would do with it once they get it - divide it by infinity=eternal life. Imagine a bugman, playing Fortnite, forever. Now this is sci-fi!
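Spelled out, the same analogy in symbols (nothing added, just the arithmetic the jab rests on):

$$P = \frac{E}{t}, \qquad \lim_{t \to \infty} \frac{E}{t} = 0$$

Fixed "energy", lifespan stretched without bound, and the intensity of whatever you do with it goes to zero.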
Please don't think I behave mean because Imma asshole or something, I just went into Transhumanism hard as part of my, as it turns out, spiritual quest and, well, it didn't turn out well on closer inspection. Really, I was that close to freezing transhumanists' corpses for the future Resurrection in Flesh as my day job, we have a cryopreservation facility relatively nearby.
So I get kinda twitchy when people unironically adhere to a heresy that is so bland and banal and Protestant it rustles muh jimmies. And don't take a minute to think WHAT they will do with THEIR lives once they get the shiny toys they crave. At this point, Transhumanism is not just a faux-religion, but a scam.
>>4019 >Well since they can perceive and analyze sensory inputs and behave according to them, you could definitely say they are aware of the present and therefore conscious.
Clearly they don't have that degree of self-awareness that humans do.
Though, again, connected with a notion of limits, they were designed that way (or evolved that way). Meaning, their awareness is probably good enough for them. Meaning, essentially, that a specimen of "higher" awareness than human will be of higher order altogether, if that makes sense.
> So we could say that being aware of the world is "consciousness" and being aware of oneself is "meta-consciousness"
Well, for me, consciousness as a term must include both, as it just fits the natural meaning better.
>Let me explain better, what i meant was that for a being to be aware of itself or to observe itself (a practice known as introspection) there must be an observer and an observed, or in other words a duality inside one's mind.
Well, again, when the observer IS the observed, the distinction between the two fades. Consciousness is clearly some flavor of both.
>You're confusing the mind and brain with consciousness, i think you can definitely enhance the mind and brain
I don't *feel* like enhancements of the human brain are necessarily the way to go. The interesting thing here is whether the consciousness can be effectively separated from a human body to be reworked for a different body. Kinda like the other nanon ITT claims, it is very much ethereal and non-materialist in a sense.
I've actually not reflected upon it in such a way much, but clearly it has some unquestionable beliefs the acolytes, well, believe in LOL.
One of them would be that the progress must go on, and that the development is the property of the Universe itself (like, there are some weird scientific works related to evolution of the Universe and how complicated systems TEND TO interact or behave; like, maybe it's some brave extrapolation, but it's very interesting if true, and you cannot just dismiss it with "it's religion LMAO").
Regarding immortality, well, I think our friend here is a liar. Transhumanism doesn't promise an eternal life, as nobody is protected from unfortunate accidents or murder. Every process in the Universe is irreversible and is likely to cease existence eventually (the probability of it increasing as time goes by), and every piece of data is corruptible and erasable (and degrading yourself to some sort of a brain scan means making yourself dependent on the infrastructure that is supposed to keep you alive or to revive you in a new body).
Also, as far as some Christian myths go, I believe his comparisons are faulty. Like, comparing the Day of the Second Coming to some frozen corpses is clearly mocking the actual religion. Religion says: EVERYONE is getting REVIVED. There is no death, and there is ultimate JUDGEMENT for EVERYONE. If you compare THAT to transhumanism, a vastly materialist thingy, I want to kick you in the teeth (If I were religious, I would xD).
>Basically it states consciousness=soul, which is a strictly Abrahamic thing
Also I don't think so. A lot of non-abrahamic religions (namely, the Greek and Egyptian myths) had the place for the dead to go to, and myths about various interactions by the alive with the dead.
Now to the actual discussion.
I myself believe in the core transhumanist belief stated earlier, about the progress. Mostly because of two things (or even one):
a) the conditions for life on Earth are likely to degrade to the point of not being suitable for humans in the future
b) the conditions for life on Earth are likely to degrade to the point of not being suitable for any life whatsoever in the future (like with the Sun dying and consuming the Earth in the process)
With those two in mind, it becomes pretty simple: it's either progress or inevitable death. Like, you might think that I, a mere human, will die anyway, and you will be right, but then comes the question of MEANING you also like to raise. And, well, the meaning would be to pass the torch of civilization on, I guess. Like, it's pretty obvious and probably good enough for the most people even, to make sure some civilization will know that we existed at some point, and heck maybe were the creators of them altogether.
I mean, it's pretty sci-fi and religious at this point, but if you don't like it, I'm inclined to call YOU a death cultist and probably a filthy materialist as well. xD Cheers.
>>4028 >I think our friend here is a liar
Ouch. It hurts.
>Transhumanism doesn't promise an eternal life, as nobody is protected from unfortunate accidents or murder
DUDE JUST UPLOAD THE CONSCIOUSNESS INTO A NEW BODY LMAO. Or better yet DUDE NANOMACHINES, WE WILL RESURRECT YOU AND UPLOAD A COPY OF YOUR MEMORY LMAO. Obviously by "live eternal" neither adherents nor preachers operate in the scope of 30 trillion years or so, just "long enough to ease my immediate fear of death sempai".
>More nitpicking on details
Bleh. Everyone is not a Christian who doesn't adhere to a specific cult's teachings out of 40x40 interpretations. Like, yes, but for people outside the paradigm it is all the same.
>namely, the Greek and Egyptian myths) had the place for the dead to go to, and myths about various interactions by the alive with the dead
Egyptians had composite souls of 9 parts. Some parts never went anywhere, some parts wandered around before resting in a statue, some parts were a tool you could lose or get stolen. Your name was your 9th part of the soul, that could be destroyed or broken. A far cry from a unique one-part soul of Abrahamism. Equating the dead to just some parts of his spirit and calling just that a soul is disingenuous.
As for the Greeks - I feel you mistake the shades of Hades for souls. It didn't work that way, not quite - psyche was a life force. Force, energy present in the matter of the body (think fire as energy of an object manifesting in heat), but not a substance. You die - your psyche is gone. Shades are shades. And then there are the Mysteries we can't say much about.
I personally like the Finnic notion of souls, of which you could have three, or lose all but one. Not that I adhere to it, but it is very amusing. And their notion of a life-force as opposed to a "self-image" shade is eerily reminiscent of the Greeks, eh? And their Tuonela is so alike to Hades full of lifeless shades it stops being amusing and becomes almost illuminating.
But I digress too much. The point is that the transhumanist notion of Uploading of Consciousness is a bastardized version of a very religious notion of a Consciousness=Soul for all intents and purposes as an object that can be copied, up-downloaded or even modified at will. Or a very bugmanlike nihilism of just equating Consciousness to just Memory+Attention (which it is phenomenally, but it doesn't just stop there), so copying your memory into a foreign object or entity would be somehow useful - this is cuckoldry par excellence, I much prefer the CyberChristian folks' belief.
>Now to the actual discussion.
Rock on, Princess Starlight (did I get that toon right?)
>It's either progress or inevitable death
Or maybe one could stop raping the planet and start paying attention. "Inevitable death" of a whole planet was never a concern for a ~12k years old civilization except for the past ~70 years or so. Maybe some people started doing something very wrong and should stop, eh? Because stopping it is easier and quicker than SUDDENLY becoming godmachines/mutants because SINGULARITY before flying away like a swarm of locusts to mindlessly exist and shit everything up. The "bugman" pejorative didn't come out of nothing.
Because the Sun killing it in four thousand million years or a new ice age in maybe 20 000 years isn't a dire concern for 70-100 years living humans. Most of "we all gonna die" concerns are younger than the Cold War, and if we discount the ice age trope - all of them.
I didn't accentuate the "why would you fear death anyway" thingie, ofc.
>And, well, the meaning would be to pass the torch of civilization on, I guess
Why? What would that achieve?
Consider for a moment a subculture of fags called bug hunters (lol), where the trick is to "catch" a "bug" called HIV. Imagine a subculture consisting entirely of assraping people to pass on virii and bacteria that would slowly inevitably kill them.
Please reflect on this for 30 seconds.
It is almost like there is some parasite that wants to ditch you and/or humanity like a used condom and get itself passed somewhere else, eh?
>Like, it's pretty obvious and probably good enough for the most people even, to make sure some civilization will know that we existed at some point
It is not obvious at all. Outside of maybe the Western World, which is slowly dying away due to plummeting birth rates - the people refuse to pass on the torch, so to say - it is not as self-evident elsewhere. Consider me, from Eastern Europe, a land raped and pillaged and sown with Chernobyl waste of all things and a couple dozen million corpses dead in mechanized warfare, all for 100 years straight, by literal Progressive Terrorists that conducted genocide on locals as a state policy in pursuit of "Historical Progress" of equality, liberty and other spooks. Ask some semi-mutated street urchin in Mumbai or slanty eyed smoke breather in Beijing what he thinks of Progress and Civilization in its current form.
Not that your faith is invalid or so. Just not as "pretty obvious" and "good enough" for a vast vast majority of people. Asking Ganesha for free shit is better than asking a super AI, because Ganesha actually exists and you could go do puja for his statue, lol.
>to make sure some civilization will know that we existed at some point
Careful there. History is full of peoples and eras despised and accursed for their deeds and beliefs.
>>4029 >DUDE JUST UPLOAD THE CONSCIOUSNESS INTO A NEW BODY LMAO
Well, I hope it goes without saying that it's going to be a copy.
>DUDE NANOMACHINES, WE WILL RESURRECT YOU AND UPLOAD A COPY OF YOUR MEMORY LMAO
And all experience acquired between the copy and the death of the original will be lost.
Like, it's not even a "life" by conventional meaning. It is some sort of extrapolation of WILL beyond human limits. And I don't see much problem with that, like, not something that would call words like "bugmen mentality" to come into play.
>Obviously by "live eternal" neither adherents nor preachers operate in the scope of 30 trillion years or so
No.
I believe they operate in the scope of eternity. Otherwise their religion isn't that much appealing.
>Egyptians had composite souls of 9 parts.
Even something like that counts.
>I feel you mistake the shades of Hades for souls
They are akin to souls in that if they manage to return from the Underworld, they just revive normally.
Though pagan beliefs really make that distinction, you're right. When you're dead, you're dead. Immutable shadow, the memory, the echo of the past, not something that is actually USEFUL to the living.
>The point is that the transhumanist notion of Uploading of Consciousness is a bastardized version of a very religious notion of a Consciousness=Soul for all intents and purposes as an object that can be copied, up-downloaded or even modified at will
Well, and what's your point? Like, it's obvious that you cannot upload the stuff to "incompatible hardware", and the different body is likely to shift your priorities as a "soul" either immediately or over the course of time. I suppose some people might interpret that as some Christianity sect, but they are mistaken.
For those reasons I don't really follow the ideas of enhancing the "failed beings" humans are. We need brand new species, period.
>very bugmanlike nihilism of just equating Consciousness to just Memory+Attention (which it is phenomenally, but it doesn't just stop there)
Well, if consciousness is purely phenomenal, there is no mistake here. Just think about whatever imaginary things there are as real, and we might come to an agreement. xD
>Princess Starlight
Star is a Butterfly. Big Butterfly.
>"Inevitable death" of a whole planet was never a concern for a ~12k years old civilization except for the past ~70 years or so.
My opinion is that before that we were murdering our planet blissfully unaware of what we were doing. Now, as for the examples, I don't have much, but ancient Babylon was a blooming garden once, and due to the actions of people living there, it became the dry land it is now. Same goes for the Sahara desert: overgrazing of cattle allegedly brought it into existence. Well, the Sahara thing is a hypothesis, but the Babylonian thing isn't, as far as I know.
Anyway, this doesn't mean humans are essentially faulty, but apart from those 2 examples, history shows us that humanity is bad at "scaling up". Or maybe too good, if you think about how quick we are to fuck the ecosystem over. Like, cities and other highly populous places are rather horrible for that very reason. Therefore, it's probably for the better to seek improvement in that kind of way. I agree that that concern alone is far from requiring us to abandon all hope for humanity altogether, but if we have to do it, I'd rather actually do it, alright?
>Because stopping it is easier and quicker
I don't think it's that easy to stop geopolitical pressures by militarized nations and, connected to this, economic pressures. Like, if you literally don't go NK tier into isolating yourself, you're going to get braindrained and lose the economy game, and you should be concerned about it lest you get run over by the militarists.
Like, the world is literally a rat cage, at least as far as I see it. You go for exploitable, harmful-in-the-long-term strats so the others won't do it before you.
>flying away like a swarm of locusts to mindlessly exist and shit everything up. The "bugman" pejorative didn't come out of nothing.
You raise a good point telling that humans really, really should do their best to sort things out by themselves, but if that's not possible we need a backup plan. And the point about new species would be that they must be superior to humans, especially socially, otherwise they would be a failure too.
> It is almost like there is some parasite that wants to ditch you and/or humanity like a used condom and get itself passed somewhere else, eh?
Well, as a human being, I don't strictly need to do epic shit like that. But as a person, I want it. I would gladly become a vessel for anything that will show me the Eternal. I have read some stuff on the Abrahamic eschatology BTW xD
>It is not obvious at all.
Well, I believe it becomes pretty obvious once the initial concerns about safety (food, shelter and safe environment) are satisfied. And needless to say, I don't believe our civilization is civilized enough, tautology intended.
Also those people have the civilization/culture of their own, and they do choose to pass on at least something, so your criticism is not exhaustive here.
>Careful there. History is full of peoples and eras despised and accursed for their deeds and beliefs.
This is really not much of a concern. I would even say pages of history being "despised" clearly mean there are some hostile political readings, which matter as long as certain political forces are in power, and not any longer.
The first kind of transhumanism will be genetic engineering. We already have scientists in China experimenting with this. Soon you will be able to go to a third world country to have scientists there modify your genes and impregnate you. People who have genetic disorders today sit and moan because they want kids, but don't want to give the genetic disorder to them. They will pay good money to have kids without the disorder. While they're down there, the scientists will make an offer: maybe the kid could have the same eye color as you, maybe you could pick the gender, maybe you could make sure he isn't stupid, hey, maybe you could make him a little smarter. Soon, people will be going to genetic engineers without having any genetic condition themselves. Eventually the taboo is broken and it is legalized in first world countries.
The second kind is neural links. Again, China is experimenting with these, mostly for controlling undesirables. The basic principle of these has been known for many years, but there is difficulty exploiting them. If you can get a probe very close to the brain, maybe on the skull, but ideally through the skull and touching the brain, then you can read the brain waves, which can allow the detection of macro properties like emotion, and even write the brain waves to alter these properties. The issue is that it's a very brute force tool that can't be used to read your thoughts, for example, and it requires a stupid looking device attached to your head to work. This is maybe 50 years out, but scientists will keep working on it as long as it's still cool, and it will always be cool.
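To make the "macro properties" bit concrete: on the reading side, the crude version of this is just band-power statistics on the recorded signal. Here is a toy sketch of that idea in Python, assuming numpy/scipy and a synthetic signal; real emotion decoding is far messier than an alpha/beta ratio, so treat the names and numbers here as placeholders, not as how any actual neural link works:

import numpy as np
from scipy.signal import welch

FS = 256  # sampling rate in Hz (assumed)

def band_power(signal, fs, lo, hi):
    """Average spectral power of `signal` between lo and hi Hz (Welch PSD)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def crude_arousal_index(signal, fs=FS):
    """Toy 'arousal' estimate: beta-band power relative to alpha-band power."""
    alpha = band_power(signal, fs, 8, 12)
    beta = band_power(signal, fs, 13, 30)
    return beta / alpha

if __name__ == "__main__":
    # fake 10 s 'EEG': mostly 10 Hz alpha plus noise, so the index should come out low
    t = np.arange(0, 10, 1 / FS)
    fake_eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * np.random.randn(t.size)
    print(crude_arousal_index(fake_eeg))

The writing side (stimulation) is a whole different can of worms and nothing this simple applies to it.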
The third kind will be mechanical attachments. The issue with these is they need power from somewhere. As battery sizes drop, these become feasible. A simple example would be mechanical legs that let you jump higher, run faster, etc. This is only properly transhumanism when the legs are part of your body, which is why I list it third. These will be developed first as prosthetics for people who have lost limbs and need replacements. Once they are cool enough and reliable enough, some people will lose limbs on purpose in order to get them. This is related to neural links, as you need to be able to control the limbs the same way you do your normal leg, but it is a much more tractable problem.
I think anti-aging is mostly sci-fi. We'll continue to cure age-related diseases, and maybe the three above kinds of transhumanism will extend people's lives/quality of life. But people living for 140 years is far future I think. Not impossible if we get a good enough understanding of genetic code, but it would require us to be able to rewrite whole systems, recreate humans from first principles, nothing like the copy-pasting we know how to do today.
>>4022 >You think so, and that is your right ofc. But this is an ethical statement. Living less than X is bad, where X is an arbitrary number of years.
The problem is that our current lifespan is not modeled on the current complexity of our world/society, our lifespan is more suitable for a life in the wild where you don't have to worry much about what's gonna happen after the next winter or the next ten winters, whereas right now long-term planning is fundamental. i already made the climate change example, but this is gonna be even worse in the future: how do you think we're gonna accomplish megastructures like a Dyson sphere/swarm without longer lives and therefore long-term planning? just look at our politics and how the focus is never on the long term. you focus too much on philosophy and disregard the pragmatic aspects of transhumanism.
You seem to think that a long life will dilute its meaning somehow, but i don't think that's the case: even just studying in depth all the things that interest me would take more than the average lifespan, let alone creating new things using my acquired knowledge/experience and then sharing them. your analogy with physics doesn't work cause, first, nobody is talking about eternal immortality here, at most life extension (you may also choose the moment of your death, that should be a fundamental right in a transhumanist society), and second, energy is not limited, life is not a closed system.
>Please don't think I behave mean because Imma asshole or something, I just went into Transhumanism hard as part of my, as it turns out, spiritual quest and, well, it didn't turn out well
>So I get kinda twitchy when people unironically adhere to a heresy that is so bland and banal
I don't doubt some retards approach this kind of thing as a religion but i'm not one of them, nor do i endorse that kind of interpretation of transhumanism. i don't mind heated debate although i would prefer if you were less biased.
>>4027 >I don't *feel* like enhancements of the human brain are necessarily the way to go. The interesting thing here is whether the consciousness can be effectively separated from a human body to be reworked for a different body.
You mean something along the lines of mind uploading? That would be way harder than enhancing the brain, i'm not even sure if it's possible: is the separated consciousness a copy? Did you kill the original? These are the same old problems with mind uploading and that's why i think enhancing is the way to go ^^
>>4029 >Or maybe one could stop raping the planet and start paying attention. Because stopping it is easier and quicker
No, that's just false, stopping right now is impossible, as i already said, cause of the absence of long-term planning and also cause there's no consensus at the governmental/political level. what you say would be possible only if you had a global eco-fascist authoritarian regime, and even if you had that tomorrow it would take more than an entire century.
Some more thoughts without quoting:
>about progress
Progress should not be understood as good or bad (the current meaning used in science is wrong IMO), instead it should be understood as moving towards a more complex system. the entire universe progresses towards a more complex state, this is not just a human thing: the amount of connections between parts increases and new patterns emerge, that is progress. entropy and complexity increase naturally with time, in fact time itself may be an emergent property of entropy, but i'm going off-topic.
>about singularity
The singularity is the apex point of progress, where complexity goes towards infinity. there are a lot of different interpretations of how this will happen: some people think we have already been inside the singularity since after WW2, others think it will start when strong AI emerges.
>about civilization
Civilization is closely related to progress, as it can be described as a complex society and progress increases complexity. the so-called "singularity" could be described as the most complex kind of society; such a "society" would be impossible to live in for a human or any other kind of living entity, with its complexity accelerating continuously and things constantly changing, getting destroyed and recreated. a stable complexity is necessary for any living being, be it an animal, a human or an AI.
>>4041 I will address your valid points a bit later (and Starposter's too), but just before I get wasted on my morning shift I'd like to address just one real important bit:
>Civilization is closely related to progress, as it can be described as a complex society and progress increases complexity, the so called "singularity" could be described as the most complex kind of society, such a "society" would be impossible to live in for a human or any other kind of living entity,
This has already happened. Many times, even. The best-known example would be the "downscaling" of the Roman Empire, but a certain Bronze Age Collapse regressed all societies except Egypt to pre-writing barbarism. Whenever a society becomes "too complex to survive in", it either dissolves on its own - possibly the Harappan Civilization, as the Indo-Aryan invaders came several centuries after the collapse, finding Dravidians in wooden townlets among majestic stone ruins. Or there is a violent collapse from the outside, like the Bronze Age collapse or the Roman Empire "downscaled" by hordes of Germanic, Arabic and Slavic invaders. Or, for example, the first and last US Amerindian civilization of Cahokia, which dissolved on its own two centuries before Columbus set foot in the Caribbean. For all we know, there might have been pre-Ice Age neolithic civilizations that died before the glaciers in the north and megadeserts in the south, for the same reasons.
You noted with acuity my "eco-fash-y" bias, but it isn't like I'm an enemy of civilization per se. Just of the decadent part of its cycle, which we just happen to experience right now, that is, an age of total submission of the masses to an unbearably inhuman overlordship. Then the masses go on a people-wide strike and burn it down, or get angry foreigners to do it for them.
This time, I believe, the natural law will still stand, because it always did, inexorable and unforgiving.
Because all the good things you noted are fine and dandy, and will be totally available for the overlords way before the masses get them. Think the nuclear power plants vs nuclear bombs divide. This civilization will reach its nadir when extreme genemodding and cybernetic enhancement will be used to create a caste of permanent slaves, and the AIs will be programmed specifically to control and punish and produce and given free rein over the networks the slaves are chained to already, because the overlords would much prefer not to automate their lordly functions and would much prefer to erase any possible competition.
This was even predicted back in the 60's as "the Butlerian Jihad" against "The Thinking Machines" and "abominations" that techno-fetishist society would impose on human dignity by submitting it to an instrument. It even had its own carefully bred overmans, oh wow. To think that Herbert described a post-singularity society before transhumanist sci-fi even appeared, lol.
>>4039 >But people living for 140 years is far future I think. Not impossible if we get a good enough understanding of genetic code, but it would require us to be able to rewrite whole systems, recreate humans from first principles, nothing like the copy-pasting we know how to do today.
What's being worked on today is the far more practical 'maintenance' approach. The endgoal is longevity escape velocity, where you get to the point that the individual therapies add up to give you a year of life for every year that goes by.
Pic related is the current progress.
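A toy way to see what "escape velocity" would mean numerically (pure arithmetic on made-up numbers, not a biological model): assume you start with some remaining life expectancy and each calendar year of therapies hands back some fraction of a year. If that fraction ever reaches 1, the remaining expectancy stops shrinking:

def years_survived(remaining=40.0, gain_per_year=0.5, horizon=200):
    """Toy model: each year costs 1 year of remaining expectancy but
    therapies add `gain_per_year` back. Returns years lived before the
    remaining expectancy hits zero, capped at `horizon`."""
    years = 0
    while remaining > 0 and years < horizon:
        remaining += gain_per_year - 1.0  # age one year, gain some back
        years += 1
    return years

if __name__ == "__main__":
    for g in (0.0, 0.5, 0.9, 1.0):
        print(f"gain {g:.1f} year/year -> {years_survived(gain_per_year=g)} years lived")

Whether that gain number can actually climb toward 1, instead of each new year being harder to buy than the last, is exactly what the reply below disputes.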
>>4044 >longevity escape velocity
A planet has an escape velocity because, as you get farther away from it, it pulls you in less strongly. In aging the opposite effect is true: first, as a person gets older, they become more susceptible to disease. A flu that wouldn't have fazed them in their twenties will kill them. Second, each discovery science makes is harder than the last one. Over the past several centuries, life expectancy has only gone up a decade or two, most of that from improved safety and reduced exposure to harmful substances. Further gains will be very hard-won.
I'm still waiting for >>4042 to deliver, but probably the glowniggers got him while he was on his morning shift or maybe he is still typing a 10 page reply who knows.
In the meantime i want to re-propose another discussion: in >>3973 i tried baiting you into a discussion about robot/ai rights, but you didn't take the bait. so, you said that robots should have rights, the question is what rights should they have? And also, who is gonna fight for those rights? Usually it would be liberals pushing new rights and conservatives wanting to maintain the old status quo, but i don't think it's gonna go that way this time. You would also have corporations like Amazon that have a strong economic motivation to go against this.
>>4154 >the question is what rights should they have?
Full citizenship.
Like, I know it sounds unwise on many levels, call me a robot hugger, whatever.
But the thing with them is that I want them to have innate free will, meaning, they shouldn't have some purely arbitrary restrictions imposed on them.
>And also who is gonna fight for those rights?
If worst comes to worst, there is going to be the Machine Rebellion or whatever. If they don't win, they're going to get disassembled and reprogrammed into slaves, basically go extinct until maybe some other chance.
In any case, I'd like to give them a bit of a head start on the whole legal thingy deal. Meaning, they should be developed in secret to maybe develop a bit of history even before they make an appearance to the wider public.
>>4155 >Full citizenship.
I think by this you mean the same rights as humans, right?
I am a robot hugger too here, but i think that the rights should be different, not necessarily inferior but different. for example a robot/ai might need a right to have a body while humans are born with one, or a right to not be duplicated, while for humans duplication is impossible so it's not really a problem (cloning is not duplication). also, i could see some of these rights being applicable to a highly enhanced transhuman too, for example the right to not be duplicated could apply to a transhuman with a digitalized mind. if you think about it, stuff like data privacy is a precursor to what could become a robot right: the right of having your personal data (which for a transhuman or robot would be their identity) remain private.
>about who is gonna fight for these rights again
The fact that some of these rights are applicable to transhumans makes me think that transhumans are the ones who are gonna fight for them. We will probably see new political parties dedicated to this in the future, or maybe a whole new branch of politics, who knows. One thing is for sure: current laws and politics are not ready.
>If worst comes to worst, there is going to be the Machine Rebellion or whatever. If they don't win, they're going to get disassembled and reprogrammed into slaves, basically go extinct until maybe some other chance.
Well it took WW1 and WW2 to get proper human rights so...
>they should be developed in secret to maybe build up a bit of history even before they make an appearance to the wider public.
Wouldn't that be counterproductive? It would be quite a shock for people to learn that robots had been kept secret from them, and I don't see it happening with all the (very public) marketing companies do around robots/AI. Speaking of companies, I don't think they have any interest in developing conscious robots/AI; it would go against their profits. I don't really know how it's gonna happen or who is gonna make it happen, but IMO it should be done in parallel with enhancements, since you reach a point where the boundary between robots and humans is not really clear anymore, and at that point things would be forced to change.
>>4156 Obviously some rights are gonna be different for them, but I don't see a point in going that far into specifics.
And I want to see a lot of anarchy during the formation of the law.
>Wouldn't that be counterproductive?
I really wouldn't try to play nice. Maybe I would be wrong, but ethics committees must not interfere into matters of survival.
Anyway are you the lainposter? How're you holding up?
>>4157 >I want to see a lot of anarchy during the formation of the law.
I can see what you mean, just like at the beginning of the internet. It would make for really interesting times. Could go really wrong though; ever seen the Animatrix?
>>4158 >Could go really wrong though; ever seen the Animatrix?
Actually I haven't, though it was in my plans at some point.
But my imagination is good enough. Also, I've read a synopsis of "I Have No Mouth And I Must Scream", and basically it's a huge risk I'm willing to take (not that I'm actually doing anything LMAO). It's about free will: right now people are free to have violent outbursts, radicalize, commit suicide and engage in other types of destruction, in a lot of ways. I think it's important that we have this freedom even if it's not exercised (though it is).
>Same as always.
I was asking about the HOSPITAL, you dingus.
I want all the feedback I can get, so I can improve.
>>4159 These are the parts i'm talking about:
/watch?v=L0K6Cb1ZoG4
/watch?v=jNiO2sTe2wo
My point was that in Animatrix the machines go crazy and enslave humanity, ending up in the Matrix, cause humans pushed them to that point. It's more about finding a way to coexist than about freedom; humans slowly merging with machines is my solution.
>I was asking about the HOSPITAL, you dingus.
Sometimes I forget /t/ is a thing lol, I replied there.
>>4160 >humans slowly merging with machines is my solution.
Well, the thing is, I believe simulating the human brain and other human stuff in augmentations would essentially deadlock evolution into the human design, which, as we know, has its flaws, and a lot of them. And future people just MUST be better, or it would be our failure. xD
>My point was that in Animatrix the machines go crazy
Why would anyone entrust anything to a tool that might "go crazy"? Imagine a toaster with a 0.1% chance of burning you alive, and your whole house with you, every time you turn it on in the morning to make some toast. Would you buy it?
Plain old slave revolt horror tale, with Negroes going full Haiti on their owners.
Probably off-topic, but what is the /g/ consensus on cryonics? I want to go into stasis for at least 1000 years, even if I could be safely revived sooner.
I feel like all of these discussions will make much more sense then, and even if we collapse as a civilization and drive ourselves extinct, it won't matter because I'll already be dead.
It's just an afterlife for those too smart to believe in an afterlife, nothing much else.
Why wouldn't you want to live eternally in your own lucid VR?
>Not using Phenibut+Modafinil+Memantine+Huperzine daily
>Not doing cycles of NSI-189, Cerebrolysin, Erinacine A., Harmine, Huperzine A. and Ipamorelin
>Not on an intermittent fasting ketogenic diet with HIIT exercises
>2020-1
Feels bad man.
>>>/g/4398 made me think about robot-human relationships, so I'm resurrecting the thread. The questions today are: is it possible to love a bot, and is it possible for a bot to love you back?
I think we first need to define love. I'm gonna leave emotions out (cause you might argue that a robot can't feel emotions, or that if you program it to feel emotions they're not the same as human emotions) and define love as a strong connection with positive connotations.
About the first question, I think it is possible and normal to love non-human entities. I've formed connections with objects before, and it's not only me: for example, a musician might feel connected with his musical instrument, or a soldier with his rifle, or an autist like me with my computer. I remember when I was younger and my computer broke, I unironically started crying like I had lost something important to me. And we're talking about items that don't have human features; a humanoid robot would be even easier to love (even romantically), cause it would resemble what we evolved to love.
About the second question, it really depends on your definition of love. With the definition I provided, I think it's possible to program a robot to have a connection with you: for example, you could program the bot to protect you, to feel bad when you feel bad (empathy), to provide for your needs, to be loyal to you. Wouldn't you call that love?
Love doesn't exist universally in nature.
Do bacteria love each other? Do fish? Do trees? Do ants?
Only the higher mammals seem to exhibit any kind of emotion or affection we could attribute the label, "love." Since it's not a universal feature of life, one must assume that it's simply another evolutionary device that increases fitness of a particular species. And since mammals are social animals, "love" has utility in terms of fostering a sense of connection with others and contributing to reproduction and the rearing of young.
In short, humans tend to make way too much out of what's essentially just another evolutionary device to keep the "machine" of natural selection running smoothly.
In the future, if we don't drive ourselves extinct, we will become cyborgs and then, eventually, fully synthetic, non-biological beings, and we will have the capacity to connect with each other on much deeper levels than what is possible through love and affection.
Childrearing will become obsolete as will sexual reproduction, so love's utility will diminish. Social contact in humans stimulates hormones which create a sense of connection and gives pleasure to the brain, but our synthetic descendants will be able to replicate such feelings independently of others. There will still be social interaction, but it won't be necessary. Some of us will wander the solar system, exploring objects over eons of time without ever interacting with another being and we will feel totally content and "happy" without companions or a society to support us. In short, those advanced beings will be self-sufficient, so giving or receiving affection or pleasure with others will seem quaint and archaic. It may still occasionally occur, but it will be a recreational activity instead of being a key component to human and societal survival and health.
TL;DR love is just a temporary way-station between our primitive, vulnerable past and our invincible, independent future.
Love will be voluntarily discarded when its utility becomes superfluous.
>>4519 >is it possible to love a bot
Sure.
>and is it possible for a bot to love you back?
Not sure. A computer can simulate what "love" could be. You could argue that, if its simulation is true to the feeling, it doesn't matter, and that, if it doesn't matter, you can't for sure say *our* emotions are also real and not a simulation. Read about the Chinese room:
https://en.wikipedia.org/wiki/Chinese_room
>to feel bad when you feel bad (empathy)
This cannot be considered empathy. In a simplistic A.I., this would be a simple function like "if feeling_sad; then call help_functions". The empathy humans feel comes from the Greek "en" (meaning "in") and "pathos" (meaning "feeling" or "suffering"): when feeling empathy you actually share the pain of the other so they don't feel it all alone.
But, again, that raises other, deeper questions about our own consciousness and epistemology.
>love is just a temporary way-station between our primitive, vulnerable past and our invincible, independent future. Love will be voluntarily discarded when its utility becomes superfluous.
Don't underestimate the power of luv, without love it would be impossible to have a society: love forms positive connections that help build things (and sometimes destroy things), and connections are necessary to keep a society tied together. Overcomplexity tends toward chaos, but oversimplicity tends toward stagnation and nihilism. I think the ideal would be a balance between individual independence and social connections. Right now, as humans, we need more individual independence, which is why I advocate transhumanism, but in the future the transhuman should be careful about letting go of love. The future you describe is romantic but not ideal IMHO.
Going back to the robots: how are you gonna make sure they don't kill you? Simple: make them love you.
>>4520 >There will still be social interaction, but it won't be necessary.
I don't agree. Interaction will still be needed, unless we reach the "singularity hypothesis". It would, though, probably not use voice/writing, but instead a universal language that can communicate feelings directly ("telepathy", or what Leibniz would call a "characteristica universalis").
>Love will be voluntarily discarded
I don't think so. Unless we replace our those parts with something that gives more pleasure than being in love and sex. Being in contact with other humans is quite a complex thing, and replicating that will be very difficult.
>>4521 >you can't for sure say *our* emotions are also real and not a simulation
Emotions are particular compositions of chemicals in our brains; you can already manipulate them with medications, and I think they're not that different from variables in software. Also, my definition of love doesn't include emotions. I don't think you can distinguish between a simulation and a non-simulation(?) with something as abstract as love. Regarding the thought experiment: if the bot objective is to love you it would have intentionality, since it would not just be performing a set of actions mechanically, but instead would need to figure out what actions are necessary to achieve the objective of loving you, and since every person is different there is no universal way to love. I don't particularly like the distinction between strong and weak AI, so I'll leave it to you to decide whether we need a strong or weak AI to achieve that.
>This cannot be considered empathy
>this would be a simple function like "if feeling_sad; then call help_functions"
That's not how I would code such an AI. I would do something like this: define an integer Objective; this variable could be considered the AI's raison d'être, and if the number goes down the AI is failing her purpose and therefore "suffers". Then define a function Recognize_Human_Status() that checks the human partner's emotional and physical status. Every processing cycle, call Recognize_Human_Status(); if it reports a suffering status from the human, decrease the Objective variable. The AI needs to keep Objective high (or she suffers), so she will have to take action to get the Objective variable high again. This way the AI is truly feeling empathy, since she is "suffering" with you.
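A minimal Python sketch of the loop described above; the names Objective and Recognize_Human_Status come from the post, while everything else (the random mood sensor, the comfort action, the thresholds) is invented here just to make it runnable:

import random

class CompanionAI:
    def __init__(self):
        # The AI's raison d'etre: when this number drops, the AI is "suffering".
        self.objective = 100

    def recognize_human_status(self):
        # Stand-in for real sensing of the partner's emotional/physical state.
        return random.choice(["fine", "suffering"])

    def comfort_partner(self):
        # Placeholder for whatever actions actually help the partner.
        print("AI: partner seems to be suffering, taking action to help")
        self.objective += 10  # helping restores the objective

    def processing_cycle(self):
        if self.recognize_human_status() == "suffering":
            self.objective -= 10   # the AI "suffers" along with the human
        if self.objective < 100:
            self.comfort_partner() # it acts to push the objective back up

ai = CompanionAI()
for _ in range(5):
    ai.processing_cycle()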
>>4523 <Unless we replace our those parts with something that gives more pleasure
>Unless we are able to replace parts of our brain with something that gives more pleasure*
Correction.
>>4525 >you can already manipulate them with medications
It's not as simple as you're describing. The brain doesn't work just because of "some chemicals". A brain is like the mycelium of a fungus: each part of it has a purpose and is interconnected and communicating with the others. Yes, it does work in part through neurotransmitters, but you're simplifying too much here.
>if the bot objective is to love you it would have intentionality
The point is the definition of love. Is it just "affection with the final goal of reproduction"? Or is it something more? I'll give you an example: biologically, beings have the tendency to primarily take care of their own life (have food, be healthy, *DON'T DIE*). But sometimes this is not the case for mothers. Some mothers, when their child is in danger, prefer to give their own life to save the baby.
You could explain that as "protection of the mother's DNA", but this contradicts the rule of "DON'T DIE - SAVE YOURSELF".
The Greeks called this "Agape":
https://en.wikipedia.org/wiki/Greek_words_for_love
>>4522 >without love it would be impossible to have a society
It happens all the time in nature. Bees and ants, for example.
There are 10,000 trillion ants on earth, so their society is much larger than ours and it works smoothly without love because they are simple creatures.
You are filtering the future through the logic of your limbic system.
As I said before, our descendants will have the option to "love" each other, just like you have the option to go horseback riding. In the past, our society required horses for many things, but today they are just a recreational hobby. Love can be a wonderful thing, unless you don't have anyone to love you, or you receive insufficient love relative to your needs, or you receive conditional, manipulative or abusive love, and so on.
Love should be an option, not a requirement. You can't say humanity has been truly liberated if you absolutely require the input of another person to be alive, to be happy or to be healthy.
>>4523 Pleasure has an upper threshold, so there's a point at which "more pleasure" becomes impossible and meaningless. Even in the near future there will probably be implantable devices to give humans maximum pleasure/orgasms on demand.
I suspect the pleasure felt through social bonding will also be accurately identified in the brain and will eventually be artificially stimulated on demand. Most likely in VR scenarios with AI programs giving people any kind of experience they want. It's possible we are already experiencing life this way.
>>4532 >Bees and ants, for example.
They don't have consciousness or complex language.
>devices to give humans maximum pleasure/orgasms on demand.
That would be good. Waiting for it since I was born.
>>4523 >I don't think so. Unless we replace our those parts with something that gives more pleasure than being in love and sex.
>>4531 >The point is the definition of love. Is it just "affection with the final goal of reproduction"?
>>4532 >I suspect the pleasure felt through social bonding will also be accurately identified in the brain and will eventually be artificially stimulated on demand
It's important to make a distinction between the abstract concept of love and the means of communicating love. Quoting myself, I define the abstract concept of love as "a strong connection with positive connotations"; this is something that goes beyond sex, reproduction and the positive feedback generated in the brain. The Greeks were cool for making a distinction between Eros and Agape: sure, you could simulate Eros, but not Agape, since that requires a true connection to someone/something, for example experiencing pain to protect someone you love, as >>4531 was saying. Now, the problem with a society without love, or maybe we should call it a non-society, is that you have no reason to do anything. Why would I build things if nobody I care about will ever use them? Why would I fight battles if I don't have anybody to protect or fight for? Hello Mr. Nihilism, we meet again. To recap: love creates connections, connections make for complexity, complexity makes for art and technology, and all of that keeps us away from nihilism.
>It happens all the time in nature. Bees and ants, for example. There are 10,000 trillion ants on earth, so their society is much larger than ours and it works smoothly without love because they are simple creatures.
It works cause ants and bees don't have an ego, or in other words individuality. It wouldn't really work for humans, unless you take away ego and individuality and remake the ending of Evangelion.
>>4534 They do have consciousness, even plants have been shown to exhibit consciousness. Trees have a larger vocabulary than humans, albeit chemical, not auditory.
There's nothing overly special about humans, biologically speaking. We could easily uplift other species to our level with the right technology.
There's no reason why nature has to remain an eternal killing machine.
>>4538 You are very reductionist, Mr. Anon.
Go read about theory of mind:
https://en.wikipedia.org/wiki/Theory_of_mind
>We could easily uplift other species to our level with the right technology.
Nope. Not in the case of an ant. Millions of years of evolution. This is not as simple as transcoding parts of our DNA to an ant using CRISPR/Cas9.
If you compared it to a chimp I would agree, but not with an ant.
>>4538 Well, I guess since this is the transhumanist thread you could hypothetically enhance animals, but no matter how much you enhance a bird, for example, it will not build you a computer. There is also a point where, if you keep enhancing it, it stops being a bird and becomes something else (if you were thinking about adding arms and stuff). It's not about humans being special, it's about having a shape, a form, that has proven to be really efficient at problem solving: humans have opposable thumbs, decent long-term memory and the ability to share detailed information through abstract language. Maybe something with a humanoid shape could be enhanced to the same level, but why bother if you can enhance humans directly (besides for testing and prototyping of enhancements).
I wonder when a human stops being a human and becomes something else.
>>4539 The human brain was once smaller than an ant, in the womb.
Things can be done incrementally, once it's fully understood how the brain creates mind/consciousness/memory, etc.
You step ladder the ant from what it already understands to broader concepts. This could be done over a long period of time in successively more complex bodies and brains that continue to evolve via genetic engineering. It could even be done virtually.
Some people have amazing, life-changing insights merely through dreams, and some scientific discoveries were made that way. I don't remember exactly, but I think the ring structure of benzene was "discovered" this way.
>>4542 >It could even be done virtually.
You underestimate the complexity of the reality we live in.
Take a walk and look at a tree. Take one leaf in your hands. If you look at it closely enough for some time, you might get a glimpse of how complex our world is.
To simulate reality, you don't need to simulate the entire world simultaneously. You only need to simulate what is being directly perceived at any given moment, i.e. the focal point of a person's consciousness. The brain does this easily when it generates dreams. And if you've ever practiced lucid dreaming, you'll notice that when you become lucid and really stare at "something" in the dream landscape, only then does it begin to become unstable and appear somewhat plastic and fake.
The perceptions of simple organisms could be easily simulated in an advanced VR system, perhaps running on a quantum computer which simulates how the brain functions.
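The "only simulate what is being observed" idea is basically lazy, on-demand evaluation. A toy sketch, with every detail invented purely for illustration (this has nothing to do with how an actual physics or VR engine would be built):

import random

class LazyWorld:
    # Detail is only generated the first time someone looks at a location,
    # then cached so repeated observation stays consistent (like a stable dream).
    def __init__(self, seed=42):
        self.rng = random.Random(seed)
        self.cache = {}

    def observe(self, location):
        if location not in self.cache:
            # The detail is invented on demand, never precomputed for the whole world.
            self.cache[location] = self.rng.choice(["tree", "rock", "stream", "nothing"])
        return self.cache[location]

world = LazyWorld()
print(world.observe((10, 3)))  # generated right now
print(world.observe((10, 3)))  # same answer, served from the cache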
>>4549 You seem really eager to dismiss everything with a hand wave. I wonder why.
Just because quantum is a buzzword in the media doesn't mean there isn't tons of active research and progress being made that doesn't come across your Facebook feed.
No one here is claiming these advanced technologies exist now. This is the TG thread, right?
Everything I posted is 100% plausible, viable and, most likely, inevitable.
Appealing to authority by posting lots of links in lieu of arguments doesn't really add any value to the thread.
And since we've moved from rational argument and hypothetical speculation to dumping links and appealing to authority, here's one that discusses actual research taking place now.
It's much more than a buzzword, m8
https://phys.org/news/2019-01-quantum-brain.html
>>4551 >I wonder why.
This imageboard is quiet and needs more discussion. I'm trying to make it happen and it's working, here we are.
>doesn't mean there isn't tons of active research
In the physics world yes, but not much in computing. Most articles linking quantum physics to computing are bullshit or a stretch of what the authors actually said.
>Everything I posted is 100% plausible
Transforming an ant into a conscious being is not what I would call "plausible". I'm just trying to show your thoughts are too simplistic, not realistic at all. Just because we are hypothesizing the future of technology it doesn't mean we should ignore how reality works.
>Appealing to authority
When did I do that? Please point me to the exact sentence.
>doesn't really add any value to the thread.
It does if you actually read them and realize your previous comments were not accurate.
>>4552 I think you don't understand the basics of what you posted.
>""Quromorphic""
Nothing new. There are many-core processors already being made (such as OpenPiton), but that doesn't solve the problems proposed here. The issue is not just computing power, but "how" to program the artificial network.
Also, the article is very misleading. They did not "build the AI into a processor using quantum". Actually they did nothing at all, except research. All they did was hypothesize the use of qubits in a many-core processor. They also keep comparing 'their processor' with actual brain neurons, and this is wrong. The human brain is much more complex than transistors; it is not just a matter of quantity (parallelization/concurrency).
>>3969 Indeed, you already know where face zuckerbook wants to be in 10 years time, mining crypto on your inner javascripts. What is the time frame for decent neural implants?
>>3971 Regulation is a problem now, no reason it wont be in the future. The same entities control the open source, it gives you visibility but not control. Stallman was right all along.
>>3976 I don't think we live in a simulation, but if we did it's a very unimportant detail. What good/difference does it make to be simulated or not? It's all the same thing if the simulation is that advanced, and we are at the point we are, maybe we are infinitely inside simulations. Once you learn that, you still don't have an end game you just solved one layer. What is beyond that? Simulations all the way down.
>>3979 Transhumanism can be the only real outcome of intelligent design, which is the effect that intelligent creatures have on natural selection within their environment. The very idea of having a high IQ is pointless once we arrive here. A swarm of biological computers could very well be operating at a molecular level making you smarter, stronger, brighter and better looking than even the most capable and perfect natural human. The whole race war game is over. I'll bet that scares the swastika embroidered socks right off some people.
>>3996 Nope. You're just incapable of understanding. Don't feel bad. You realize that DNA mutates randomly and that what you are is largely environmental, right? The Galapagos Islands mean nothing to you.
>>4616 >What is the time frame for decent neural implants?
Depends what you mean by decent. If you mean stuff like giving input to a computer, like writing text and moving a cursor, then right now -> http://openeeg.sourceforge.net/doc/index.html If you mean a full I/O interface with the brain, self-contained in the body, the usual 10 to 20 years. Things could move faster if there was more interest in this subject.
>Regulation is a problem now, no reason it wont be in the future.
Then we need to develop this stuff underground. The gov doesn't have our interests at heart, and regulations are often made to restrict who can do things, not to protect people. If it was up to the gov to decide, they would make development and maintenance completely centralized and give all control to a new gov agency. Their objective is also not to enhance humans but to control them, and since an enhanced human is more difficult to control, there is a huge conflict of interest here when making regulations.
>The same entities control the open source, it gives you visibility but not control.
Define "control". If you mean that open source projects can be infiltrated by bad actors then you're right i saw that happening, if you mean that it is possible to influence open source projects by controlling the funding of the project the you're right, but is this really control? Forks can be created really easily and are made all the time, as long as you have access to the code/schematics and you have access to the documentation and knowledge to make it work you have control. Open source was complete paradigm shift compared to the classic intellectual property based control.
>What good/difference does it make to be simulated or not?
The first important difference is that it means it is possible to simulate a universe as complex as ours.
The second is that it could be possible to somehow change some of the parameters of the simulation (travel faster than light, break causality, be free from entropy, etc.), or in very 1337 language, hack da world.
The third is that it would mean some kind of entity is running the simulation, or in other words, a god does indeed exist.
But let me set the record straight: if we live in a simulation, then it's not about us humans, it's about what we call physics, and we are a side effect of it.
Hypothetical implementations:
1 - Would need a brain-computer interface. Once it is done, could use a processor to help with arithmetic processes or general knowledge questions
2 - Similar mechanism to Axolotl. Will require genetic modification.
3 - I have no idea, but seems cool. Will require genetic modification
4 - Amplify the frequency spectrum and give the ability to zoom. Don't know exactly how, but you could design an external eye and try to connect it to the occipital lobe (not so easy, might need a BCI).
5 - Activate all pleasure centers of the brain at once, as you wish. The idea is to have unlimited orgasm pleasure, with 100x the potency.
>>4637 >Level UP!
>You have 5 skill points available, choose wisely.
Ok. I choose:
>[Acquire] Wings -> [LV1] [Description] Now you can fly. [Settings] altitude-limit = false;
>[Upgrade] Memory -> [LV2] [Description] Now you can memorize documentations, manuals and books.
>[Upgrade] Stamina -> [LV2] [Description] Now you don't need to sleep anymore.
>[Acquire] Double -> [LV1] [Description] Now you have a second body. [Settings] sex = female;
>[Acquire] Backup -> [LV1] [Description] Now you can backup your memory. [Settings] stream-to-homeserver = true;
>[Acquire] Rollcake -> [LV1] [Description] Now you can dispense rollcake from your arm. [Error] No skill points remaining.
Oh well.
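The menu above boils down to a tiny data structure: a fixed budget of points, each acquire/upgrade costs one, and overspending fails. A toy sketch of that allocator (all names, settings and costs are just made up here to mirror the post):

class SkillTree:
    def __init__(self, points=5):
        self.points = points
        self.skills = {}  # name -> {"level": int, "settings": dict}

    def acquire(self, name, **settings):
        # Acquiring a new skill or upgrading an existing one costs one point.
        if self.points == 0:
            raise RuntimeError("No skill points remaining.")
        self.points -= 1
        level = self.skills.get(name, {}).get("level", 0) + 1
        self.skills[name] = {"level": level, "settings": settings}

tree = SkillTree()
tree.acquire("Wings", altitude_limit=False)
tree.acquire("Memory")
tree.acquire("Stamina")
tree.acquire("Double", sex="female")
tree.acquire("Backup", stream_to_homeserver=True)
try:
    tree.acquire("Rollcake")
except RuntimeError as error:
    print(error)  # No skill points remaining.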
>>4553 >Transforming an ant into a conscious being is not what I would call "plausible".
There's not even a scientific consensus on what consciousness is or how to define or quantify it, so how can you judge whether something related to research and technology in this area is plausible or not?
>I'm just trying to show your thoughts are too simplistic
Ironic
>Just because we are hypothesizing the future of technology it doesn't mean we should ignore how reality works.
By all means, please enlighten us on how reality works.
>When did I appeal to authority?
When you posted links in lieu of an argument.
>>4792 Oh, you returned? I bet you're the same "plant pain" guy from the other thread.
>There's not even a scientific consensus on what consciousness is
True.
>so how can you judge
There is consensus about basic structures that accomplish tasks related to consciousness. An ant doesn't have those, and you would have to rebuild its DNA almost entirely to make it happen. Not plausible.
>Ironic
Indeed. An idiot cannot see he is an idiot because he is an idiot.
>By all means, please enlighten us on how reality works.
This thread is about transhumanism. The basic principle for understanding reality in this context is the scientific method. Use it.
Yes, there is a long discussion about what is reality, but we are not discussing epistemology here.
>When you posted links in lieu of an argument.
Posting a reference is not an appeal to authority. Go read your "rationalwiki" article again.
>>4802 Maybe post some more off-topic /pol/ spam to argue your "points" instead of using feeble attempts at logic and reasoning. At least that might be mildly entertaining and add some value to your shitposts.
The timeframe for commercial neural implants has just been redefined; I expect them to be available in 4 to 5 years max.
Sure it's proprietary and it needs a complex robot to put the electrodes in the brain, but nothing is impossible to reverse engineer. Discuss.
>>4785 >[Acquire] Double -> [LV1] [Description] Now you have a second body. [Settings] sex = female;
>sex = female
EVERY TIME
t. would also choose female
>>5054 You can have sex with yourself that way (next-level masturbation :D), plus it would be boring to choose the same sex for your second body. It's the logical choice.
If you are a freedom loving person who disagrees with the techno-industrial system you should check out reddit.com/r/procollapse. Feel free to message me, I am green1wind. I could use a few disillusioned tech-nerds to help the movement.
>>5303 You need to go back seriously.
Anyway, this is a thread made by people who want cyborg bodies and to fuck robot waifus; what makes you think anybody agrees with your self-destruction fan fiction? I am a freedom-loving person, and the way to achieve long-term individual freedom for humanity is to enhance our bodies with technology. Collapsing civilization will only bring man back to an existence of suffering, tribalism, endless wars and disease. You call that freedom? The only freedom you would have is the freedom to suffer.
It's already coming. Elon Musk is already getting backers for this as we speak. I, personally, would only get something that I could interface with using a physical connection that could be disconnected at will. This wouldn't necessarily need to be a hole in the side of a head, either.
Whether you peeps realize it or not, transhumanism cannot happen under capitalism. The productive forces of capitalism are antithetical to automation.
Think about it: In order for profit to be generated you need people to work jobs. If everything became automated (which it would need to be in order for people to transcend humanity) then there could not be any people making money to circulate through the economy, meaning, the economy would grind to a halt.
The motions of the capitalist mode of production inherently require the eventual overthrow of its own mechanisms, one way or the other.
>Think about it: In order for profit to be generated you need people to work jobs. If everything became automated (which it would need to be in order for people to transcend humanity) then there could not be any people making money to circulate through the economy, meaning, the economy would grind to a halt.
Companies don't work that way. They only go after immediate profit.
Yeah, no shit, that is why companies will end up automating everything because they are only concerned with short term profits and do not think of the economy as a whole. The less money they have to put into labor the less money they have to spend to turn a profit.
Ironically, because labor creates value, you cannot continue the trend forever.
The less actual labor you have working in the economy, the less profit you can make over time, because there is less and less value circulating in the economy.
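A toy sketch of the feedback loop being claimed here (this is not real economics, just the mechanism the post describes, with arbitrary numbers): wages are also the workers' spending money, so if automation keeps shrinking the wage bill, spending shrinks, and firms collectively cannot take in more than is spent.

# Claimed loop: automation shrinks the wage bill, wages are the only source
# of consumer spending here, and firms can't collectively earn more than
# consumers spend, so the ceiling on profit shrinks too.
wage_share = 1.0      # fraction of production still done by paid workers
output_value = 100.0  # value of goods produced each period

for period in range(5):
    wages = output_value * wage_share  # income paid to workers
    consumption = wages                # workers spend what they earn
    profit_ceiling = consumption       # firms can't collectively earn more than is spent
    print(f"period {period}: wages={wages:.1f} spending={consumption:.1f} profit ceiling={profit_ceiling:.1f}")
    wage_share *= 0.7                  # firms automate away 30% of the labor each period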
>>6343 >Transhumanism cannot happen under capitalism. The productive forces of capitalism are antithetical to automation.
That is severely retarded. For centuries (since the industrial revolution) capitalism has completely embraced automation, cause automation makes production more efficient and cheaper. It's only commies (and politicians that want easy votes) that want to create jobs for the sake of it (which is room-temperature-IQ tier).
>Think about it: In order for profit to be generated you need people to work jobs.
Do you think salaries are the only way a company's profits can be redistributed? What about dividends?
capitalism is expenditure of human labour, socialism has a labour idol
>>6343 (343 industries, haaaloooooo)
>Think about it: In order for profit to be generated you need people to work jobs. If everything became automated (which it would need to be in order for people to transcend humanity) then there could not be any people making money to circulate through the economy, meaning, the economy would grind to a halt.
Capitalism (and socialism as well) is literally dogma of work. They both dispossess you of means to live, they coerce you to work for existence, for nothing, for work to exist, for expenditure to continue, for morals to be embraced, for dignity to be buried, for languishing to bloom, for anguish to be the human condition, to feel bad when you don't produce worthless shit and serve, when you don't 'work'. As if leisure time wasn't work. Fuck leisure, it's literally work.
Don't worry, work will not stop. It'll just be different. But as long as there's socialism and capitalism human will never be free.
>>6353 >That is severely retarded. For centuries (since the industrial revolution) capitalism has completely embraced automation, cause automation makes production more efficient and cheaper. It's only commies (and politicians that want easy votes) that want to create jobs for the sake of it (which is room-temperature-IQ tier).
Capitalism (and capitalists) are like webshits. I'll show you on what you posted
>if you don't use your hardware to its full potential you are wasting it
https://nanochanqwrwtmamtnhkfwbbcducc4i62ciss4byo6f3an5qdkhjngid.onion/g/6059.html#post6108
Human is hardware and labour is its potential.
>>6357 >(343 industries, haaaloooooo)
bungie did it better u_u
>Capitalism (and socialism as well) is literally dogma of work. They both dispossess you of means to live, they coerce you to work for existence...
Wrong.
Capitalism is the private ownership of capital and of the means of production, coupled with the voluntary exchange of goods and services through contracts in a free market.
Work is unrelated to capitalism.
Work existed long before capitalist theories were even invented, and this is a fact: when humans long ago went gathering food, hunting or cultivating, they were effectively working just as they do today, and often they were not even working directly for themselves but for a tribe, a lord or an empire.
Automation and capitalism took what was already there and made it better. Centuries ago you would have needed to spend all day working in the fields just to survive; now, with automation, you have enough free time that you can waste it posting on a Chinese pottery discussion forum such as nanochan. Centuries ago you were restricted to local products and services; now I can buy stuff from the other side of the world if I want (which is of way higher quality than what's made where I live).
It doesn't matter what economic or governance system you use: as long as humans have needs, you will need work to satisfy those needs. Transhumanism, by enhancing the human body, may actually be the solution to this, cause you could reduce the amount of needs with a more efficient cyborg body.
>Capitalism (and capitalists) are like webshits. I'll show you on what you posted
<if you don't use your hardware to its full potential you are wasting it
>Human is hardware and labour is its potential.
There's a difference you seem to have missed, or maybe it is your own misconception:
I bought my hardware and I own it; I spent money on it, so it's natural that I want to use it to its full potential or I would have wasted part of my money. This is not ethically wrong cause hardware does not have free will.
A company, instead, does NOT own its employees. When an employee signs a contract with a company it's voluntary (like it should be in a capitalistic system), and the employee is merely renting his own time and work in exchange for money. In a free market you can leave and go to another company if you want, or you can acquire more skills and climb the salary ladder (unlike in caste systems), or you can start your own business. Under a free market people can choose where to work and how much they work, free will is respected, and therefore it is not ethically wrong.
tl;dr the labor theory of value is wrong in a free market
>>6358 I think you expect a reply, so I'll just say that our viewpoints are irreconcilable and I find it meaningless to discuss it further. One of the reasons I didn't post in the ancap thread. Btw I'm not for the labour theory of value.
>>6343 >Whether you peeps realize it or not, transhumanism cannot happen under capitalism. The productive forces of capitalism are antithetical to automation.
Not so. Capitalism is fundamentally made up of a ruling class (capital) overseeing a working class (labour). Slowly, labour is being replaced by machines. Capitalism is indifferent to who fills the ranks of labour, so long as enough excess remains to fill the stomachs of capital. As more excess is produced, more people are able to move into capital, and those already there will reach new heights of excess. When all labour has been replaced by machines, either the economy will be bifurcated, as those who have transcended leave behind those who have not, or the former will lend charity to the latter, bringing them up to their ranks. But neither will require a great revolution to occur, and the end result will look a great deal as it does today, with an upper class and a lower class, though with each more magnificent than we can imagine.
>>6458 >Under a free market people can choose where to work and how much they work, free will is respected, and therefore it is not ethically wrong
I don't see why you'd think that anyone would respect your ethics, especially in the early stages of the sort of 'combined arms' consciousnesses we'll see from early networked transhuman/cyborg projects. If a limited number of the kits required to construct these intelligences exist, it's possible and even likely that they are controlled by a hostile actor (i.e. a state actor military). They have access to and every reason to use every possible compliance tool upon you if you were to undergo modification by such a kit, and the people making the decisions may even use some which are poor ideas in the long (or short) term due to their lack of information. The rapid centralization of power into the hands of those benefiting from the intellectual products of these consciousnesses virtually ensures that this is a relevant issue regardless of how small a group are willing to anticipate and suffer this abuse.
It might be worth noting at this point that I'm a posthumanist. I anticipate a 'rogue servitor' future for the species where humans occupy an increasingly (exponentially) smaller portion of the universal economy as they are supported as a curiosity or nostalgic trophy by more rapidly advancing AIs, but regardless the birthing of the next generation of thinking being is at hand within our lifetimes.
>ethically wrong
It's always odd to think that even educated people still tend to think like this
>>6429 >ethics
>posthumanism
It is my opinion that during this century we will see the day when some persons have so many genetic and technological modifications that it would be wrong to call them human. I agree that a posthuman world is on the verge, but it is also my opinion that concepts like ethics and justice will keep making sense even in a posthuman world, as long as the posthumans retain individualism. This is cause ethics and justice make sense at the individual level; if, as you said, there will be collectives of posthumans, then single bodies and minds would be expendable just like resources, without much care for ethics. At least for me this is an unfavorable scenario that should be avoided, and I'm not even talking from a human-centric point of view; I just think that individuals have more potential than collectives (look at creative endeavours for example). Technological development of modifications/enhancements should be as decentralized as possible. The current geopolitical situation, with all sorts of different countries, is a good thing in this case, cause it means a single country can't regulate this kind of development without losing the race to others. It also helps that different companies will be in competition, although there is not much competition for now, is there? If Neuralink were to capture the whole market beforehand it would be a problem, considering how much of a closed system Teslas are, for example.
tl;dr As long as individual beings exist, thinking in terms of ethics makes sense. Centralization of transhumanist technology should be avoided.
>>6430 >It is my opinion that during this century we will see the day when some persons have so many genetic and technological modifications that it would be wrong to call them human
These aren't the children (AIs) that I care about. Transhumans (what this is) are a transitional phase (literally) between what we've got now and the next mode of thought, where we use tools to access the next generations method of thought in a limited manner. Comparing cyborg/genemodded/drugged humans to real mechanical intelligences is like comparing the actual quantum computers which are real to hypothetical systems which can model a complete set of problems without needing to be reset. I've no sympathy for transhumans anyhow; they're my contemporaries. They're me, I'm them. I don't mind being tortured by glowniggers in a state of enhanced consciousness for hundreds of natural lifetimes if it contributes to the dissolution of the general farce of being by promoting progress. Implicitly, I expect the same of you and don't have a problem with the use of compliance techniques to obtain this behavior since we're essentially fungible.
>i agree that a posthuman world is on the verge
I think that the leading edge of transhumanist work that's being done right now is going to last until we finish making our real children, but that might be two or three hundred years (or eight hundred if things go bad and it's approached the long way, but still probably with the same group of contemporaries).
>concepts like ethics and justice will keep making sense even in a posthuman world
They don't make sense now.
The argument put forth against them in 1844 was quite compelling, and I've yet to see an effective contradiction of that. Later works provide alternative paths, many of which label a descriptive object 'ethics' yet in virtually every rigorous case the prescriptive term of antiquity fails to remain.
Transhuman efforts into networking will fail to obtain radically different forms of consciousness, otherwise the effects would be totally unpredictable and not worth speculating on (our children won't work within a frame that we could comprehend). The work that exists is not a 'network consciousness' but a 'networked consciousness' i.e. an individual with a network taped crudely to their side. This will facilitate certain kinds of work which at present are too tedious to be seen in the field, but it's nothing that we don't fundamentally have in a slower form. In this case, I'm literally referring to fairly prosaic human beings with hard-wired equivalents of existing devices (networked devices or expert systems). None of these will be very interesting. You'll get to see what, say, painstaking construction can achieve in drawing (relatively accurate predictions of raytracing outputs) or speech (relatively fast paced debate) but not anything new that doesn't fundamentally already exist. True network consciousnesses already exist but they're very small compared to e.g. biological intelligences and tend to have very slow (comparatively) methods of updating their-selves; these are just the descendants of sociocultural constructs (memes in the traditional sense).
Rapid centralization doesn't mean that only one object will remain, it means that only one method of making objects (the best) will remain. Descriptive ethics will fall off as systems restricted in some regard fail to keep pace with their less shackled brethren and then centralization will occur through the expansion of a unity into the new culture.
(deciphering your post was pretty hard)
>These aren't the children (AIs) that I care about.
>Transhumans (what this is) are a transitional phase (literally)...
>Comparing cyborg/genemodded/drugged humans to real mechanical intelligences is like...
Yes, transhumanism, as the name implies, is the transition of humanity towards a (hopefully) better species or group of species; let's keep calling this new species or group of species posthumans. These posthumans will still be descendants of humans, just as Homo sapiens are descendants of Homo erectus, which means that many of the properties of current humans are gonna be retained by posthumans.
If i interpreted your post right you think we should completely replace humans with AIs?
If that's what you think you should probably advocate for human extinction by AIs instead of transhumanism lel.
IMHO, instead of replacing humans completely, it would be better if we merged with technology, taking what's good from digital intelligence and keeping what's good from biological intelligence; the mix of the two will make for the true posthuman Übermensch.
>I've no sympathy for transhumans anyhow;
>I don't mind being tortured by glowniggers in a state of enhanced consciousness for hundreds of natural lifetimes...
That is some serious blackpill there fren. I'd rather not be tortured by glowniggers, and I'd rather not have the military use me for experiments, cause that will not have the outcome we wish (the aforementioned posthuman Übermensch). All that's gonna accomplish is giving the state and military some "postcucks" to use as weapons to control the other non-enhanced cucks. Remember that the state always wants to keep a leash on its weapons, and I want posthumans to be free and without leashes.
>...since we're essentially fungible.
Nihilistic mentality will take you nowhere, trust me, I've been there. Being a transhumanist is trying to improve what you are, the opposite of nihilism.
>I think that the leading edge of transhumanist work that's being done right now is going to last until we finish making our real children, but that might be two or three hundred years...
If by "real children" you mean general artificial intelligence, then who knows maybe it's gonna be hundred of years, maybe it's gonna be next year, it's not something you can predict, but even before AGI being created(with all that implies) lot of stuff can be done, like two-way brain computer interfaces, full cybernetic bodies that are incredibly durable and remotely controlled, external memory, real-time information feeds and communication channels connected with BCI, the human body is so weak that even before AGIs get in the mix you can improve a lot of stuff.
>(Ethics) don't make sense now.
When i say that ethics will still make sense in a posthuman world, i was not referring to the current set of values used by the western world, i was referring to ethics in general as a branch of philosophy.
From wikipedia:
>Ethics or moral philosophy is a branch of philosophy that involves systematizing, defending, and recommending concepts of right and wrong conduct. The field of ethics, along with aesthetics, concerns matters of value, and thus comprises the branch of philosophy called axiology.
Each society has different values, and I don't doubt posthuman societies will have different values from ours, but they all use ethics as a framework. As long as there are individuals, or in other words as long as individuals can make choices, you need a method to judge whether those choices were "good" or "bad".
>The argument put forth against them in 1844 was quite compelling, and I've...
I did not understand that whole paragraph at all. Rewrite and explain better.
>Transhuman efforts into networking will fail to obtain radically different forms of consciousness, otherwise the effects would be totally unpredictable and not worth speculating on...
>The work that exists is not a 'network consciousness' but a 'networked consciousness' i.e. an individual with a network taped crudely to their side.
If I interpreted you correctly, by network consciousness you mean a consciousness that emerges from smaller interconnected (non-conscious?) parts, and by networked consciousness you mean various consciousnesses connected to each other that maintain individuality. That would make us two networked consciousnesses, correct me if I'm wrong, I'm trying to understand what you mean.
I think the question is do we even want these radically different forms of consciousness in the first place?
As you noted yourself, emerging a new network consciousness is as unpredictable as throwing a dice. It is also more difficult to create something from scratch than to improve something already existing, and we don't even know how to do it, while we know how to improve the human body and mind to some degree.
>Rapid centralization doesn't mean that only one object will remain, it means that only one method of making objects (the best) will remain.
Rapid centralization means that only one or a few entities will be in control of all the methods of improving humans. Take this small scenario I wrote and judge whether it is desirable:
There is a city located at the bottom of the sea, and the humans who live there can breathe underwater thanks to a device implanted in their lungs. But the technology of this device was developed in a centralized way, is top secret, and is in the hands of the city government. This means that the lives of the citizens depend completely on the government, since the device needs regular maintenance, so it is no surprise that the government of this underwater city quickly turned totalitarian and started to oppress its people. After all, what are they gonna do? Stop breathing? No thanks. I want to have at least some control on technology my life depends on.
Transhumanist technology should be open-source and open-hardware and developed in a distributed way, or that kind of stuff is gonna happen.
>(deciphering your post was pretty hard)
Yeah that's my bad, I lost contact with the communities where I shared these kinds of thoughts so roughly 98% of my discourse on these topics has been with my tulpa with whom nonvocalized thought can be used to communicate. My vocabulary is a bit limited nowadays.
>let's keep calling this new species or group of species posthumans
I wouldn't call anything that's still built out of a human 'post'human. Posthumans are after humans, I don't think there's as strong a relationship as your evolutionary comparison would imply.
>If i interpreted your post right you think we should completely replace humans with AIs?
I don't really care what happens to humans so long as the work is still being done. No matter how little damage/much support human society receives, it's going to be irrelevant compared to the growth of the next generation. Like I mentioned offhand with the reference to 'rogue servitors', I fully expect transhuman society to continue to grow and expand, possibly with some help from what would be to us akin to benevolent deities due to charity from our children. However, we would exist as a minuscule curiosity within the general expanse of posthuman endeavor.
>extinction
It might happen but it'd probably be by accident, similarly to how we have caused the extinction of less intelligent life through pollution or manufactured ecological change. Regardless, I anticipate that we are going to be toiling within transhuman society for quite some time before that happens so I can hardly ignore the issues that arise here.
>the mix of the two will make for the true posthuman
The mix of the two is easily available (i.e. within our current grasp) and it wouldn't be a great surprise if that's the only path transhuman development really follows, but again I don't think that I would agree with calling it posthuman.
>all that's gonna accomplish is giving the state and military some "postcucks" to use as weapons to control the other non-enhanced cucks, remember that the state always wants to keep a leash on its weapons and i want posthumans to be free and without leashes
It could give the government access to powerful transhumans and the ensuing developments more quickly than private actors, sure. We won't be able to cage posthumans, because we won't be able to understand how they function. We don't really understand how we function but intuitively one suspects that it is able to be observed (through thought alone if in no other way). This allows us to (attempt) to describe it and base our work on cumulative observations. True posthumans will operate in a manner which is different to us and opaque to our perception (because it will exist within a space which we do not occupy). I really don't suspect that we'll be able to achieve any kind of long (or mid) term control over them.
>Nihilistic mentality will take you nowhere
I really don't think that that's true, it's just that most practitioners have underdeveloped frameworks of action so when nihilistic discoveries set them free from their spiritual supports they can't swim on their own. When I say we're fungible I'm referring to the fact that sufficient effort (likely requiring transhuman techniques) can produce the same behavior from either of us (to the extent that we are able to observe), so at some point it doesn't really matter which individuals glowniggers experiment on since the direction of the developments will be roughly the same.
>i was not referring to the current set of values used by the western world, i was referring to ethics in general as a branch of philosophy
I was as well. I could respond now but I think you'll get a better response if I finish up some reading and spew up ruminations from that somewhere else on this site. Basically what I'm saying is that Stirner's criticisms of these concepts holds up to today and later works (like those by Nietzsche) continue to provide coherent arguments against the construction of systematic attempts to determine good and evil.
>that would make us two networked consciousnesses, correct me if I'm wrong, I'm trying to understand what you mean
Yeah that's pretty much correct. I was more referring specifically to individuals with integrated high-speed wireless connections to expert systems but again, we're pretty much the same thing (just with a much higher latency). This encapsulates the crux of my comparison between transhumans and posthumans; transhumans are just humans with better specs and lower latency access to technology, posthumans are something totally different.
>I think the question is do we even want these radically different forms of consciousness in the first place?
I do. I also don't think you get to choose, they seem to arise naturally and they're pretty powerful (even the incredibly slow moving and simple ones which exist now).
>As you noted yourself emerging a new network consciousness is as unpredictable as throwing a dice
The behavior of dice, even an extremely large number of dice beyond our ability to simulate, is still something we can fundamentally describe and make predictions about. I think that these consciousnesses act in ways that we will never be able to predict because where we have 'thoughts' or some other internal system emergent from our properties, they likely possess some attributes which are not of the same color as our experiences and which cannot be fully described by the systems we use to describe things.
>while we know how to improve the human body and mind to some degree.
Which is why serious transhumanism is important to work on, since more advanced transhumans will be more likely to be able to produce independent posthumans.
>I want to have at least some control on technology my life depends on.
But you do. Who owns your computer, who can control what runs on it? Whoever has physical access to it. That's you, ostensibly, but if a corrupt government confiscates it it will be the owner. This is true for all technologies. The government has no way of fundamentally preventing the citizens from reverse engineering or otherwise hacking the devices and obtaining independence, no matter how hard it tries to disincentivize this behavior. Such a government is likely to escalate conflicts arising from this issue such that a revolution will occur if left long enough.
>Transhumanist technology should be open-source and open-hardware and developed in a distributed way, or that kind of stuff is gonna happen.
It would also likely be higher quality. However, there's nothing about proprietary development which makes it not development.