Programmer and sysadmin (DevOps?), wannabe polymath in tech, science and the mind. Neurodivergent, disabled, burned out, and close to throwing in the towel, but still liking ponies 🦄 and sometimes willing to discuss stuff.

  • 4 Posts
  • 1.3K Comments
Joined 1 year ago
Cake day: June 26th, 2023


  • Check out this one for a general overview:

    https://youtu.be/OFS90-FX6pg

    You may also want to check out an intro to neural networks; Q* is a somewhat new concept. Other than that… “the internet”. There are plenty of places with info; I’m not sure there is a more centralized and structured one.

    Learning to code with just ChatGPT is not the best idea. You need to combine three areas:

    • general principles (data structures, algorithms, etc)
    • language rules (best described in a language reference)
    • business logic (computer science, software engineering, development patterns, etc)

    ChatGPT’s programming answers give you an intersection of all those, often with some quirks, and with the nice (but only) benefit of explaining what it thinks it is doing. You still need a basic understanding of all three in order to follow what ChatGPT is talking about, double-check it, and look for more info. It can be a great time saver for generating drafts, though.


  • It’s not a statistical method anymore. One of the breakthroughs of large neural network models has been that, during training, an emergent process assigns neurons to both relatively high-level and specific traits, which at the same time “cluster up” with other neurons assigned to related traits. Adding just a bit of randomness (“temperature”) allows the AI to jump from activating one trait to a nearby one, but not to one too far away. Confidence becomes a measure of how close the output is to a consistent set of traits trained into the network. Interestingly, a temperature of 0 gives a confidence of 100%… but produces repetitive gibberish.
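
    As a minimal sketch of what temperature does at the sampling step (the trait clustering itself lives inside the network; the logits below are just made-up raw scores over possible next tokens):

    ```python
    import numpy as np

    def sample_token(logits: np.ndarray, temperature: float = 1.0) -> int:
        """Pick the next token id from raw model scores ("logits")."""
        if temperature == 0:
            # No randomness: always take the single most likely token.
            return int(np.argmax(logits))
        scaled = logits / temperature          # higher T flattens the distribution
        probs = np.exp(scaled - scaled.max())  # softmax, numerically stable
        probs /= probs.sum()
        return int(np.random.choice(len(probs), p=probs))

    # Low temperature sticks to the strongest trait; higher ones allow jumps.
    print(sample_token(np.array([2.0, 1.5, -3.0]), temperature=0.7))
    ```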

    If its data contains a commonly held belief, that is incorrect

    This is where things start to get weird. An AI system based on an LLM can iterate over its own answers looking for the optimal one (Q*), and even detect inconsistencies in them. What it does after that depends on whoever programmed it (rough sketch after the list):

    • Maybe it casts any doubt aside and outputs the first answer anyway (the original ChatGPT did that; it didn’t even bother self-checking much)
    • Or it could ask an authoritative source (ChatGPT plugins work like that)
    • Or it could search the web for additional info (Copilot and Gemini do that)
    • Or it could alert the user to both the low confidence and the inconsistencies (…but people want omniscient AIs, not “err… I’m not sure, Dave” AIs)
    • …or, sometime in the future (or present?), they could re-train themselves, maybe by generating a LoRA that would bring in corrected biases, or even additional concepts.
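
    A rough, hypothetical sketch of that kind of self-checking loop, using agreement between several sampled drafts as a stand-in for a real consistency check (the names and the voting rule are my own illustration):

    ```python
    import random
    from collections import Counter

    def generate(prompt: str) -> str:
        # Hypothetical stand-in for an LLM sampled with temperature > 0.
        return random.choice(["Paris", "Paris", "Paris", "Lyon"])

    def answer(prompt: str, n_drafts: int = 5, threshold: float = 0.6) -> str:
        drafts = [generate(prompt) for _ in range(n_drafts)]
        best, votes = Counter(drafts).most_common(1)[0]
        confidence = votes / n_drafts      # agreement between drafts as a crude proxy
        if confidence >= threshold:
            return best                    # cast doubt aside and output it
        # Low confidence: this is where a system could instead query an
        # authoritative source, run a web search, or flag itself for re-training.
        return f"Not sure ({confidence:.0%} agreement): {best}"

    print(answer("Capital of France?"))
    ```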

    Over time, I think different AI systems will evolve to target accuracy, consistency, creativity, etc. Current systems are rudimentary compared to what’s yet to come, and too many are used in equally rudimentary ways by anyone who can slap an “AI” label on something and sell it.


  • Current AI chatbots already assign a “confidence level” to every piece of output. It signals perfectly well when and where they should look for more information… but humans have been pushing them to “output something, anything” instead of excusing themselves for not knowing, or running additional processes to look for the missing information.

    As of this year, Copilot has been running web searches to compensate for its lack of information, and Gemini is running both web searches and iterative self-checks of its own answers in order to refine them (see “drafts”). It also seems like Gemini might be learning from humanity’s reactions to its wrong answers.
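
    As a toy example of where such a “confidence level” can come from: many completion APIs expose per-token log-probabilities, and aggregating them gives a number a system can act on (the aggregation rule below is just an illustration, and the values are made up):

    ```python
    import math

    def sequence_confidence(token_logprobs: list[float]) -> float:
        # Geometric mean of per-token probabilities: one crude "confidence level".
        return math.exp(sum(token_logprobs) / len(token_logprobs))

    print(sequence_confidence([-0.05, -0.10, -0.02]))  # ~0.94: model was sure
    print(sequence_confidence([-0.05, -2.60, -0.02]))  # ~0.41: one shaky token
    ```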



  • “Porn made of me”? You mean, by paying me to sign an agreement, or by drugging and/or forcing me…? Just to be perfectly clear: I’m not a photo.

    The video game doesn’t produce anything.

    Are we talking about the game’s video capture, or the feeling of wanting to puke onto that piece of shit until it drowns?

    What do you propose reduces… porn fakes?

    Something like “teaching your brat”. Porn fakes don’t even become a problem until they get distributed to others. Adults can go to jail; that deters some.

    My problem with machine learning porn is that it’s artless generic template spam clogging up my feed

    That… has more to do with tagging and filtering than with anything mentioned above.

    It’s also somewhat weird to diss the “template” of an AI’s output, when porn videos have settled on a template script for about half a century already. If anything, I’ve seen more variety from people shoving their prompts into some AI than from porn producers all my life (Japanese “not-a-porn” ingenuity excluded).




  • Not exactly.

    LLMs are predictive-associative token algorithms with a degree of randomness and some self-reflection. A key aspect is that anything can be a token: they can feed their own output back in, creating the basis for a thought cycle, as well as output control inputs for other algorithms. It remains to be seen whether the core of “(human) intelligence” is much more than that, and by how much.
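
    A toy version of that self-feeding loop; the `next_token` table below is a hypothetical stand-in for a real model’s learned predictions:

    ```python
    def next_token(context: list[str]) -> str:
        # Toy rule so the example runs; a real LLM predicts from learned weights.
        return {"the": "cat", "cat": "sat", "sat": "."}.get(context[-1], "the")

    def generate(prompt: list[str], max_tokens: int = 10) -> list[str]:
        context = list(prompt)
        for _ in range(max_tokens):
            tok = next_token(context)   # predict from everything so far...
            context.append(tok)         # ...then feed the output back as input
            if tok == ".":              # a token can also act as a control signal
                break
        return context

    print(generate(["the"]))  # ['the', 'cat', 'sat', '.']
    ```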

    Stable Diffusion is a random image generator that refines its output based on perceptual traits associated with a prompt. It’s like a “lite” version of human dreaming, only with a super-human training set. Kind of an “uncanny valley” version of dreaming.
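
    Loosely sketched (`predict_noise` below is a fake stand-in for the trained network, with a flat gray image standing in for the “perceptual traits” of the prompt), the refinement loop looks like:

    ```python
    import numpy as np

    def predict_noise(image: np.ndarray, prompt: str) -> np.ndarray:
        # Hypothetical stand-in for the trained network: "what here doesn't fit?"
        target = np.full_like(image, 0.5)
        return image - target

    def generate(prompt: str, steps: int = 50, shape=(8, 8)) -> np.ndarray:
        image = np.random.randn(*shape)           # start from pure noise
        for t in range(steps):
            noise = predict_noise(image, prompt)  # estimate what doesn't belong
            image = image - noise / (steps - t)   # remove a fraction of it
        return image

    img = generate("a pony")  # ends up close to the model's idea of the prompt
    ```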

    It just so happens that both algorithms were showcased at about the same time, and it’s the first time we can build a “set and forget” AI system that can both make decisions about its own next steps and emulate human creativity… which has driven the hype into overdrive.

    I don’t think we’ll stop hearing about it, but I do think there is much more to be done, and it’s pretty much impossible to feed any of these algorithms human experience data without first recording at least one full human learning cycle, as in many years of data from inside a humanoid robot.




  • Imaginary grenades.

    Check out the gaming industry, where a kid from the other side of the world can tell you how they will “kill your mom, r*pe your sister, and make you watch” (sic), just before killing and teabagging your corpse in-game. At some point the “it’s just a game” excuse also stops holding water… and yet somehow most people are capable of differentiating the “tool” from the piece of shit using it.

    Regarding DUI laws, they’re also wrong, focusing on the effect instead of the cause. AI is not the cause of generating deep fakes, just like DUI is not the cause of getting drunk, and games are not the cause of being a piece of shit.

    Ain’t it interesting how coming up with a consistent framework makes it applicable to different areas of life?



  • That’s a lack of vision.

    I want… people to put on a VR headset and fuck off to their fantasy world instead of messing up IRL. I want creepy incels, rapists, and similar, to have a means to act out their creepiness without impacting real people’s lives. Even more, I want them to prefer their virtual fantasies, because they can control them with the push of a button, instead of brainwashing, gaslighting, grooming, drugging, and finding other ways to control real people.

    The sooner AI+VR gets more realistic and easier to use, the better for everyone.