• 0 Posts
  • 14 Comments
Joined 1 year ago
Cake day: June 8th, 2023


  • sLLiK@lemmy.ml to Programmer Humor@lemmy.ml · vim · 7 points · 1 year ago

    At times, I’ve also juggled (in addition to vim and tmux) hotkeys for my current tiling WM of choice and extra hotkeys to swap between machines via barrier. I’m not sure how I’m able to remember what I had for breakfast, much less someone’s name.



  • Honestly? This hole-in-the-wall food store in my home town managed to pick up a pretty early release of the arcade game Robotron. I was instantly enthralled and visited arcades any time I could. From there, I played on friends’ Atari 2600s and Commodores until I managed to get my own C64, and I’ve never stopped since. I migrated through Commodore’s lineup and stayed a diehard fan till the mid-’90s - C128, Amiga 1000, Amiga 500, and Amiga 2000.

    I played a few early x86 games on demo machines in stores, but I didn’t relent and build my own x86 rig until the release of the Descent 1 demo, which single-handedly destroyed my remaining resolve. I already considered myself a pretty consistent gamer, but that was the nail in the coffin. The rest, as they say, is history. EverQuest came out only four years later, and that swallowed me whole.


  • Legit. Piracy related to home PC software has been around since the advent of home PCs. Before the concept of LANfests or LAN parties even existed, there were copy parties. I still have vivid memories of 8+ 1541 drives daisy-chained to a single C64. University servers hosting warez… Usenet… there are likely earlier examples I’m not aware of.

    Before that, people were hacking phone systems in order to call long distance for free. This ain’t nothin new.

    Not something I’ve indulged in for 30+ years, though. I pay for everything, now. Guilty conscience, I suppose. 😁



  • This is the most insidious conundrum related to AI usage. At the end of the day, an LLM’s top priority is to make sure your question gets an answer that satisfies the model; the accuracy of that answer is a secondary concern. If forced to choose between making up BS so it can produce a response that looks right and admitting it doesn’t have enough information to answer, it can and often will choose the former. Thus the “hallucination” problem was born.

    The chance of getting your answer lightly sprinkled with made-up stuff is disturbingly high. This shifts the cognitive load on the AI user from “what is the answer” to “I must repeatedly go verify everything in this answer because I can’t trust it”.

    Not an insurmountable obstacle, and they will likely solve it sooner rather than later, but AI right now is arguably the perfect extension of the modern internet - take absolutely everything you read with at least a grain of salt… and keep a pile of salt cubes close by.