Even if they did hallucinate answers, it wouldn’t be the first game that relies on the “unreliable narrator” trope.
I mean, it would technically be possible to build a computer out of organic, living biological tissue. It wouldn’t be very practical, but it’s technically possible.
I just don’t think it would be very reasonable to consider that the one thing making it intelligent is that it’s made of proteins and living cells instead of silicates and diodes. I’d argue that such a claim would, in itself, be a strong claim too.
Note that “real world truth” is something you can never accurately map with just your senses.
No model of the “real world” is accurate, and not everyone maps the “real world truth” they personally experience through their senses in the same way… or even necessarily in a way that’s truly “correct”, since the senses are often deceiving.
A person who is blind experiences the “real world truth” by mapping it to a different set of models than someone who has additional visual information to mix into that model.
However, that doesn’t mean that the blind person can “never understand” the “real world truth”… it just means that the extent to which they experience that truth is different, since they need to rely on other senses to form their model.
Of course, the more different the senses and experiences between two intelligent beings, the harder it will be for them to communicate with each other in a way they can truly empathize with. At the end of the day, when we say we “understand” someone, what we mean is that we have found enough evidence to hold the belief that some aspects of our models are similar enough. It doesn’t really mean that what we modeled is truly accurate, nor that if we didn’t understand them then our model (or theirs) is somehow invalid. Sometimes two people are technically referring to the same “real world truth” but simply don’t understand each other, because they focus on different aspects/perceptions of it.
Someone (or something) not understanding an idea you hold doesn’t mean that they (or you) aren’t intelligent. It just means you both perceive/model reality in different ways.
Step 1. Analyze the possible consequence/event that you find undesirable.
Step 2. Determine whether there’s something you can do to prevent it: if there is, go to step 3; if there isn’t, go to step 4.
Step 3. Do that thing you believe can prevent it. After you’ve done it, go back to step 2 and re-evaluate whether there’s something else you can do.
Step 4. Since there’s nothing else you can do to prevent it, accept that this consequence might happen and adapt to it… you already did all you could given the circumstances and your current state/ability, and you can’t do anything about it anymore, so why worry? Just accept it, and try to make it less “undesirable”.
Step 5. Wait. Entertain yourself some other way… you did your part.
Step 6. Either the event doesn’t happen, or it happens but you already prepared to accept the consequences.
Step 7. Analyze what happened (or didn’t) and how. Try to understand it better so that in the future you can better predict/adapt under similar circumstances, and go back to step 1.
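In code, that loop would look something like this (a minimal sketch; the helper function and the dictionary format are made-up stand-ins for the real-world judgment calls each step describes):

```python
def find_preventive_action(concern, tried):
    # Step 2: return an untried action that might prevent the outcome, or None.
    return next((a for a in concern["preventions"] if a not in tried), None)

def handle_worry(concern):
    tried = set()
    # Steps 2-3: keep acting while there's still something you can do.
    while (action := find_preventive_action(concern, tried)):
        print(f"Step 3: doing {action!r}")
        tried.add(action)
    print("Step 4: nothing left to do, accepting that it might happen")
    print("Step 5: waiting, entertaining myself some other way")
    happened = concern["happens"]  # Step 6: it occurs, or it doesn't
    # Step 7: analyze the outcome so future predictions improve.
    print(f"Step 7: it {'happened' if happened else 'did not happen'}, updating my model")

handle_worry({"preventions": ["back up the data", "test the restore"], "happens": False})
```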
The AI can only judge by having a neural network trained on what’s a human and what’s an AI (and btw, for that training you need humans)… which means you can break that test by making an AI that also accesses that same neural network and uses it to self-test the responses before outputting them, providing only exactly the kind of output the other AI would give a “human” verdict on.
So I don’t think that would work very well… it’d just be a cat & mouse race between the AIs.
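Here’s a toy sketch of that break (everything in it is hypothetical; a real attack would query the actual detector network instead of these stand-in functions):

```python
import random

def judge_classify(text):
    # Stand-in for the trained human-vs-AI detector the test relies on.
    return random.choice(["human", "AI"])

def generate_candidate(prompt):
    # Stand-in for the generator model trying to pass the test.
    return f"candidate response #{random.randint(0, 999)} to {prompt!r}"

def evade_detector(prompt, max_tries=100):
    """Self-test each candidate against the judge, and only output
    the kind of response the judge itself would label 'human'."""
    for _ in range(max_tries):
        candidate = generate_candidate(prompt)
        if judge_classify(candidate) == "human":
            return candidate
    return None  # never fooled the judge within the budget

print(evade_detector("Are you human?"))
```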
It could still be Bayesian reasoning, just a much more complex form of it, underlaid by a lot of preconceptions (which could themselves have been acquired in a Bayesian way).
Even if the result is random, a highly pre-trained Bayesian network that has seen many puzzles or tests before, most of which do follow non-random patterns, might expect a non-random pattern here too… those people may have learned not to expect true randomness, since most things aren’t random.
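As a toy illustration of that kind of prior (all the numbers are invented for the example): after seeing an alternating sequence, someone whose experience says “most tests are patterned” ends up nearly certain it isn’t random.

```python
# Two hypotheses about the sequence "ABABABAB":
#   H_random:  each symbol is an independent fair coin flip.
#   H_pattern: symbols alternate with probability 0.9.
sequence = "ABABABAB"

p_random = 0.5 ** len(sequence)               # likelihood under H_random
p_pattern = 0.5 * 0.9 ** (len(sequence) - 1)  # first symbol 50/50, then alternation

# A learned prior: "most things aren't random."
prior_pattern, prior_random = 0.8, 0.2

evidence = p_pattern * prior_pattern + p_random * prior_random
posterior_pattern = p_pattern * prior_pattern / evidence

print(f"P(pattern | data) = {posterior_pattern:.3f}")  # ~0.996
```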
Yes… the Chinese room experiment misses the point, because the Turing test was never really about figuring out whether or not an algorithm has “consciousness” (what is that, even?)… but about determining whether an algorithm can exhibit intelligent behavior that’s equivalent to/indistinguishable from a human’s.
The Chinese room is useless because the only thing it proves is that people don’t know what consciousness is, or what they’re even trying to test.
A test that didn’t require a human could theoretically be run automatically by the machine itself, preemptively, and solved easily.
I can’t imagine how you would test this in a way that wouldn’t require a human.
The first thing @seSvxR3ull7LHaEZFIjM said was: “Assange is a bit of a scumbag” …
The closest thing to “righteousness” said was: “his efforts for freedom of information should not land him in US torture prisons like many others.”
Which, being true, is absolutely not challenged or contradicted by anything you said in response.
Note that “freedom of information” is totally compatible with “picking and choosing” the manner in which you exercise that freedom. In fact, I’d argue that the freedom of “picking and choosing” what’s published without external pressure is fundamentally what freedom of the press is about.
Assange (like any other journalist) should have the freedom of “picking and choosing” what facts he wants to expose, as long as they aren’t fabrications. If they were shown to be intentionally fabricated, then things would be different… but if he’s just informing, a mouthpiece, even if the information is filtered through an editorial line, then that’s just journalism. That’s a freedom that should be protected, instead of attacking him for publishing (or not publishing) this or that.