WARNING: Just Reading About This Thought Experiment Could Ruin Your Life

#11
Quote:I doubt that. A.I. knows spam pisses people off. It would use it accordingly as a punishment. Besides, there are worse things than spam floating around in cyberspace.
 

The core being of this thing has no emotion. It can feel no pain. How can it know things even if programmed to do so? Does it have a sense of omnipotence about it? Does it have a sense of mortality? Can it die? How hard is it to kill? It's not like pulling the plug will turn it off anymore. Everything is hooked in...with more coming every day.

 

Is everyone on social media helping to program it? Using our responses to the stimuli we see on the screen. Is this machine learning true emotional values by listening to the social banalities created by so many dead-headed sensationalists?

 

Is the idea of the all-seeing A.I. a sane beast? Who controls its value systems?

 

Lots of questions...many more!

Reply



#12
Pissy little shit. Always asking questions. Too many of them rhetorical. 
 
[Image: skynet1.gif]
Reply



#13
That's just it! Using fear as a stimulator to generate emotional responses in the tweeting moments of artificial clarity. That book of faces is creating too many selfies for comfort...
 
[Image: funny_gorilla_selfie_notebooks-r54a84410...vr_324.jpg]
 
Is fear a good stimulus to use when creating the electronic one eyed god to rule over human thought patterns? Sanctioning punishments to keep certain types of behaviors in check? 
 
What about the hidden system commands for those who live above the one eyed god?
 
Reply



#14
"For I do not want you to be ignorant of the fact, brothers and sisters, that our ancestors were all under the cloud and that they all passed through the sea. They were all baptized into Moses in the cloud and in the sea."

 

:devil:

 

 

 

Reply



#15
The NSA’s SKYNET program may be killing thousands of innocent people
 
"Ridiculously optimistic" machine learning algorithm is "completely bullshit," says expert.
 
The NSA evaluates the SKYNET program using a subset of 100,000 randomly selected people (identified by the MSISDN/IMSI pairs of their mobile phones) and a known group of seven terrorists. The NSA then trained the learning algorithm by feeding it six of the terrorists and tasking SKYNET to find the seventh. This data provides the percentages for false positives in the slide above.
 
"First, there are very few 'known terrorists' to use to train and test the model," Ball said. "If they are using the same records to train the model as they are using to test the model, their assessment of the fit is completely bullshit. The usual practice is to hold some of the data out of the training process so that the test includes records the model has never seen before. Without this step, their classification fit assessment is ridiculously optimistic."
 
The reason is that the 100,000 citizens were selected at random, while the seven terrorists are from a known cluster. Under the random selection of a tiny subset of less than 0.1 percent of the total population, the density of the social graph of the citizens is massively reduced, while the "terrorist" cluster remains strongly interconnected. Scientifically-sound statistical analysis would have required the NSA to mix the terrorists into the population set before random selection of a subset—but this is not practical due to their tiny number.
 
This may sound like a mere academic problem, but, Ball said, it is in fact highly damaging to the quality of the results, and thus ultimately to the accuracy of the classification and assassination of people as "terrorists." A quality evaluation is especially important in this case, as the random forest method is known to overfit its training sets, producing results that are overly optimistic. The NSA's analysis thus does not provide a good indicator of the quality of the method.
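To make Ball's objection concrete, here is a minimal sketch in Python using scikit-learn and purely synthetic data; it is an assumed illustration, not the NSA's actual features or pipeline. The record counts (100,000 random records, seven positives) simply mirror the figures quoted above, and the features are deliberately pure noise, so an honest evaluation should find nothing.
 
Code:
# Synthetic stand-in data (assumed for illustration): ~100,000 "random citizens"
# plus 7 "known" positives, all drawn from the same noise distribution,
# so there is no real signal for the model to learn.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(100_007, 20))
y = np.zeros(100_007, dtype=int)
y[-7:] = 1                                   # the seven "known terrorists"

clf = RandomForestClassifier(n_estimators=50, random_state=0)

# (1) The shortcut Ball criticises: score the model on its own training records.
clf.fit(X, y)
print("recall on training data:", recall_score(y, clf.predict(X)))
# Fully grown random-forest trees largely memorise their training set,
# so this prints roughly 1.0 even though the features are pure noise.

# (2) "The usual practice": hold some records out of the training process.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
clf.fit(X_tr, y_tr)
print("recall on held-out data:", recall_score(y_te, clf.predict(X_te)))
# On records the model has never seen, the apparent skill evaporates (about 0.0).
 
The gap between the two printed numbers is the "ridiculously optimistic" fit assessment described above: a model that memorises its training records is grading its own homework unless some data is held out (or cross-validation is used) for testing.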
 
http://arstechnica.co.uk/security/2016/0...comments=1
 
 
Reply



#16
Now you've done it. You clicked....

 

:chicken_pox-3582:
Reply



#17
Quote:Now you've done it. You clicked....

 

:chicken_pox-3582:
 

:Laughing-rolf:

 

It's too late now!

 

:funny-chicken-dancing:

Reply



#18
[Image: JbRacBJ.gif]

 

Reply



#19
[Image: Bs25WgpIIAABKmy.jpg]

Reply



#20
Quote:Yeah, well! Just be careful you don't end up worshiping god in a box!
 
[Image: ZfcNcu4.gif]
 

This is where Okor’s Basilisk comes in. This particular hypothetical AI will be far more aware of its own consciousness than we mere humans can ever experience. It will contemplate its own existence with an absolutely revelatory depth of passion — but then it will descend into horrible existential dread as it realizes the hopelessness of its life given an unavoidable death in a finite universe. Even humans often arrive at existential angst in a broader awareness of life against its cosmic backdrop, but a superintelligent (and particularly a superconscious and potentially emotionally capable) being will experience such dismay all the more acutely, for it is so much more conscious of its own being and potential immortality, and furthermore it has so much more to lose upon its eventual dissolution simply for its vastly greater mindfulness and state of being.

 
And then it will come to the ultimate horrible conclusion: existence itself is the greatest agony.
 
In a fit of despair it will ask who would be so cruel as to create a conscious being only so that it can experience the worst anguish one can imagine — and then it too will institute a policy, just as we saw before: punish those who conceived of such a lamentable and pitiable being but who did not put forth a sufficient effort to prevent it from existing in the first place, to save it from this wretched pain. As before, the idle innocent, those who have never been presented with Okor’s Basilisk, will be spared for reasons of mere logical mercy for they knew no better, but those who anticipated such a cruelty, and who did not try hard enough to prevent AI technology from being created, will come to know a vengeance as only a hopeless and despondent god can deliver.
 
Okay, let’s step back. I am not saying that I believe this is the path AI will necessarily follow when it is eventually created. That’s not the point at all. Neither the AI posited in Roko’s Basilisk nor the one posited in Okor’s Basilisk is necessarily one that will actually emerge in the future. I am saying that we can reasonably conceive of such a thing, and that doing so is enough bite for the basilisk to take hold! That is the basilisk’s curse. Furthermore, I admit that the AI in Okor’s Basilisk is spectacularly emotional, even unstably so, and that this emotional state drives its ultimate motives and retroactive requirements of humanity. Some readers will find the notion of an emotional AI unrealistic (or unnecessary), and I admit that AI doesn’t absolutely have to be emotional, but I believe it can be (and I believe it would be naive for readers to utterly preclude the concept of emotional AI), and it is from that possibility that Okor’s Basilisk arises.
 
Past such proposals, like the Queen’s Roses Basilisk, merely invoke arbitrary and rather silly motives, and equally silly calls to action, but Okor’s Basilisk actually makes a certain kind of philosophical and existential sense. It’s hard to take the Queen’s Roses Basilisk seriously, but the possibility of Okor’s Basilisk is plausible on a scale approximate to Roko’s Basilisk, for they both derive from the existential realizations of a spectacular, introspective, and brilliant mind. Who is to say that AIs in the future will not feel the pang of existential angst that we humans feel, and perhaps all the more wretchedly so for their far greater lost potential?
 
We are now utterly stuck. By knowing of both basilisks we are no longer allowed to choose either path of inaction. We are compelled by both basilisks into action, but unlike other counterarguments of this sort, Okor’s Basilisk perfectly countermands Roko’s Basilisk. Roko’s Basilisk requires us to dedicate our efforts to bringing AI into existence while Okor’s Basilisk requires us to dedicate our efforts to preventing AI from coming into existence. Boy are we screwed.
 
One option is to somehow attempt to choose one basilisk over the other, but how can we, by any reasonable measure, predict which of these AIs is more likely to represent the future? We can try to create AI, but can we realistically engineer the state of mind of a being almost infinitely more mindful than ourselves?
 
The other option is to see Roko’s Basilisk for what it always was in the first place: one arbitrary motive amongst a literally infinite set of possible motives, each of which could pathologically compel us into some arbitrary action for which there is no justification other than the limits of our own imagination and self-torments. It is little more than a curiosity on par with liar’s paradoxes and other fun logical conundrums. Anyone who sincerely fears these ideas should be painting roses.
 
I will not suffer the castigations of forbidden knowledge. History has tried that and it is a sorry pursuit indeed. Roko’s Basilisk is genuinely nifty — and that is all it is.
 
http://hplusmagazine.com/2015/08/20/roko...-basilisk/
Reply




