Programmer and sysadmin (DevOps?), wannabe polymath in tech, science and the mind. Neurodivergent, disabled, burned out, and close to throwing in the towel, but still liking ponies 🦄 and sometimes willing to discuss stuff.

  • 5 Posts
  • 1.51K Comments
Joined 1 year ago
Cake day: June 26th, 2023


  • Check out this one for a general overview:

    https://youtu.be/OFS90-FX6pg

    You may also want to check out an intro to neural networks; Q* is a somewhat new concept. Other than that… “the internet”. There are plenty of places with info; not sure if there is a more centralized and structured one.

    Learning to code with just ChatGPT is not the best idea. You need to join three areas:

    • general principles (data structures, algorithms, etc)
    • language rules (best described in a language reference)
    • business logic (computer science, software engineering, development patterns, etc)

    ChatGPT’s programming answers give you an intersection of all those, often with some quirks, with the nice (but only) benefit of explaining what it thinks it is doing. You still need a basic understanding of each area in order to understand what ChatGPT is talking about, how to double-check it, and how to look for more info. It can be a great time saver as a way to generate drafts, though.




  • It’s not a statistical method anymore. One of the breakthroughs of large neural network models has been that, during training, an emergent process assigns neurons to both relatively high-level and specific traits, which at the same time “cluster up” with other neurons assigned to related traits. Adding just a bit of randomness (“temperature”) lets the AI jump from activating one trait to a nearby one, but not to one too far away. Confidence becomes a measure of how close the output is to a consistent set of traits trained into the network. Interestingly, a temperature of 0 gives a confidence of 100%… but produces gibberish.
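
    Rough illustration of the temperature knob, if it helps: toy Python with made-up scores, not any particular model’s implementation.

    ```python
    import numpy as np

    def sample_with_temperature(logits, temperature, rng=np.random.default_rng(0)):
        """Pick one option from raw scores ("logits"), scaled by temperature."""
        if temperature == 0:
            # Temperature 0 degenerates into always picking the single top score.
            return int(np.argmax(logits))
        scaled = np.array(logits) / temperature
        probs = np.exp(scaled - scaled.max())   # numerically stable softmax
        probs /= probs.sum()
        return int(rng.choice(len(logits), p=probs))

    # Made-up scores for four candidate "traits"/tokens.
    logits = [4.0, 3.8, 1.0, -2.0]

    for t in (0, 0.7, 1.5):
        picks = [sample_with_temperature(logits, t) for _ in range(20)]
        print(f"temperature={t}: {picks}")
    # t=0 always picks index 0; higher temperatures let the nearby option
    # (index 1) win sometimes, while the far-away one (index 3) stays unlikely.
    ```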

    If its data contains a commonly held belief, that is incorrect

    This is where things start to get weird. An AI system based on an LLM can iterate over its own answers looking for the optimal one (Q*), and even detect inconsistencies in them (see the sketch after this list). What it does after that depends on whoever programmed it:

    • Maybe it casts any doubt aside, and outputs the first answer anyway (original ChatGPT did that, didn’t even bother self-checking too much)
    • Or it could ask an authoritative source (ChatGPT plugins work like that)
    • Or it could search the web for additional info (Copilot and Gemini do that)
    • Or it could alert the user to both the low confidence and the inconsistencies (…but people want omniscient AIs, not “err… I’m not sure, Dave” AIs)
    • …or, sometime in the future (or present?), it could re-train itself, maybe by generating a LoRA that would bring in corrected biases, or even additional concepts.
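
    Here’s a toy sketch of the kind of loop I mean; generate_draft() is just a stand-in for the LLM call, and real systems (Gemini’s “drafts”, Q*-style search) are far more involved:

    ```python
    import random
    from collections import Counter

    def generate_draft(question, rng):
        """Stand-in for an LLM call: returns one of several possible answers,
        some of them wrong, to mimic a model that isn't fully consistent."""
        return rng.choice(["42", "42", "42", "41", "six times nine"])

    def answer_with_self_check(question, n_drafts=5, rng=random.Random(0)):
        drafts = [generate_draft(question, rng) for _ in range(n_drafts)]
        best, votes = Counter(drafts).most_common(1)[0]
        agreement = votes / n_drafts
        if agreement < 0.6:
            # Drafts disagree: flag it, search the web, ask an authoritative
            # source... whatever the system was programmed to do (see above).
            return f"(low confidence, drafts disagreed) {best}"
        return best

    print(answer_with_self_check("meaning of life?"))
    ```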

    Over time, I think different AI systems will evolve to target accuracy, consistency, creativity, etc. Current systems are kind of rudimentary compared to what’s yet to come, and too many are used in very rudimentary ways by anyone who can slap an “AI” label on them and sell them.


  • No pictures of kids for example

    Meaning, an AI blind to kids.

    Keep in mind that training data is required for both recognition and generation. Legislating that, to an AI, kids should be “It doesn’t look like anything to me” leads to things like:

    • Cars that don’t stop for “It doesn’t look like anything to me”
    • Spam filters that don’t stop porn, or gore, or both, involving “It doesn’t look like anything to me”
    • Photo storage that erases photos as “empty” because “It doesn’t look like anything to me”

    For porn specific AIs, don’t allow users to upload custom images

    Not sure how you think AIs work, but anyone can train a LoRA on their own laptop; no “uploading” to anywhere required.

    Companies clearly can’t be trusted to put in safeguards for themselves, so I guess it is time for legislation.

    Cool, and I agree with that. I just think that example is horrific (for starters, it would make Lemmy’s anti-CSAM filter illegal, since it’s trained on pictures of kids).

    Got any other proposals?


  • This is generally true… but when we moved with a bunch of cats from an apartment into another apartment on the ground floor less than 500 m away, just as I was showing one of the cats around, he jumped out of my arms, went ballistic across the terrace, jumped the wall, then another wall, crossed a street, and went up a cliff before I could do anything.

    He stayed within a 1 km radius, so after a week and something, some kids recognized him, and I got to climb onto a precarious bunch of overgrowth on top of a cliff to finally get him back.

    Moral of the story: he ran away all scared, but didn’t know how to get back in, so he just stayed around… it’s important to make sure the cat knows how to get into the home, not just find the way home and then run away again when a dog or whatever scares them.


  • The current state of AI chatbots assigns a “confidence level” to every piece of output. It signals perfectly well when and where they should look for more information… but humans have been pushing them to “output something, anything”, instead of excusing themselves for not knowing something, or running some additional processes in order to look for the missing information.

    As of this year, Copilot has been running web searches to complement its lack of information, and Gemini is running both web searches and iterative self-checks of its own answers in order to refine them (see “drafts”). It also seems like Gemini might be learning from humanity’s reactions to its wrong answers.
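
    To make the “confidence level” idea concrete, here’s a toy sketch; generate_with_logprobs() is a stand-in for a model call that exposes per-token log-probabilities, and the numbers are invented, so don’t read this as any vendor’s actual API:

    ```python
    import math

    def generate_with_logprobs(prompt):
        """Stand-in for a model call returning generated tokens plus the
        log-probability the model assigned to each one."""
        return [("The", -0.1), ("answer", -0.3), ("is", -0.2),
                ("maybe", -2.9), ("1975", -3.4)]

    def confident_answer(prompt, threshold=0.5):
        tokens = generate_with_logprobs(prompt)
        # Geometric-mean token probability as a crude confidence score.
        avg_logprob = sum(lp for _, lp in tokens) / len(tokens)
        confidence = math.exp(avg_logprob)
        text = " ".join(tok for tok, _ in tokens)
        if confidence < threshold:
            # Low confidence: this is where a system could run a web search,
            # call a plugin, or simply admit it isn't sure.
            return f"I'm not sure ({confidence:.0%} confident): {text}"
        return text

    print(confident_answer("When was X released?"))
    ```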



  • “Porn made of me”? You mean, by paying me to sign an agreement, or by drugging and/or forcing me…? Just to be perfectly clear: I’m not a photo.

    The video game doesn’t produce anything.

    Are we talking about the game’s video capture, or the feeling of wanting to puke onto that piece of shit until it drowns?

    What do you propose reduces… porn fakes?

    Something like “teaching your brat”. Porn fakes don’t even become a problem until they get distributed to others. Adults can go to jail; that works on some.

    My problem with machine learning porn is that it’s artless generic template spam clogging up my feed

    That… has more to do with tagging and filtering, rather than anything mentioned above.

    It’s also somewhat weird to diss the “template” of an AI output, when porn videos have settled on a template script for about half a century already. If anything, I’ve seen more variety from people shoving their prompts into some AI than from porn producers all my life (Japanese “not-a-porn” ingenuity excluded).




  • Not exactly.

    LLMs are predictive-associative token algorithms with a degree of randomness and some self-reflection. A key aspect is that anything can be a token; they can feed their own output back in as input, creating the basis for a thought cycle, as well as output control inputs for other algorithms. It remains to be seen whether the core of “(human) intelligence” is much more than that, and by how much.
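
    As a cartoon of that self-feeding loop (the tiny hand-written table below stands in for the actual neural network over tens of thousands of token types):

    ```python
    import random

    # Toy "model": given the last token, score a few possible next tokens.
    NEXT_TOKEN_SCORES = {
        "the": {"cat": 0.5, "pony": 0.4, "end": 0.1},
        "cat": {"sat": 0.7, "ran": 0.3},
        "pony": {"ran": 0.6, "sat": 0.4},
        "sat": {"down": 0.8, "end": 0.2},
        "ran": {"away": 0.7, "end": 0.3},
        "down": {"end": 1.0},
        "away": {"end": 1.0},
    }

    def generate(start="the", max_tokens=10, rng=random.Random(1)):
        tokens = [start]
        while len(tokens) < max_tokens:
            options = NEXT_TOKEN_SCORES[tokens[-1]]
            # The model's own previous output becomes its next input: that
            # feedback is the basis of the "thought cycle" mentioned above.
            nxt = rng.choices(list(options), weights=list(options.values()))[0]
            if nxt == "end":
                break
            tokens.append(nxt)
        return " ".join(tokens)

    print(generate())
    ```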

    Stable Diffusion is a random image generator that refines its output based on perceptual traits associated with a prompt. It’s like a “lite” version of human dreaming, only with a super-human training set. Kind of an “uncanny valley” version of dreaming.
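
    And a similarly loose cartoon of the “start from noise, refine toward the prompt” process; nudge_toward_prompt() is a toy stand-in for the real text encoder plus denoising network, so this is the shape of the idea, not Stable Diffusion itself:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Pretend the prompt has already been turned into a target in image space.
    # In the real thing that's a text encoder plus a U-Net predicting noise in
    # a latent space; here it's just a fixed random array.
    prompt_target = rng.normal(size=(8, 8))

    def nudge_toward_prompt(image, step, total_steps):
        """Toy denoiser: blend the noisy image toward the prompt target,
        trusting the target more as the remaining noise level drops."""
        strength = (step + 1) / total_steps
        return (1 - strength) * image + strength * prompt_target

    image = rng.normal(size=(8, 8))   # start from pure random noise
    total_steps = 20
    for step in range(total_steps):
        image = nudge_toward_prompt(image, step, total_steps)
        # (real samplers also re-inject a controlled amount of noise each step)

    print("distance to target:", float(np.abs(image - prompt_target).mean()))
    ```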

    It just so happens that both algorithms have been showcased at about the same time, and it’s the first time we can build a “set and forget” AI system that can both make decisions about its own next steps, and emulate human creativity… which has driven the hype into overdrive.

    I don’t think we’ll stop hearing about it, but I do think there is much more to be done, and it’s pretty much impossible to feed any of these algorithms with human experience data without recording at least one human learning cycle, as in many years’ worth of data from inside a humanoid robot.