Lol ads that can be engineered into DNA, so that they can be passed down for generations.
Doesn’t seem to be so comfortable with glasses, especially with a hoodie, unfortunately.
Hold up, are you sure you can’t view Discussions or Wiki? On which sites can’t you view them?
I’m fine viewing them for public repos that I usually visit.
Asking to make sure that GitHub is not slowly rolling out this lockdown.
“Bad” can be quite broad, and it might be cumbersome to check and categorize all of the “badness” out there. You might have better luck narrowing it down a bit. For example, if you’re interested in AI/algorithm incidents, there are at least two that came up on search:
On a tangential note from another comment about AI training and such: this is a touchy and evolving subject, but it might be good to include how you want your content to be used and not used, and by whom, especially if you intend to make it public.
Some wiki backends allow password protection. For example, MkDocs, which also renders Markdown, has mkdocs-encryptcontent-plugin to allow global or even page-specific passwords for private repos.
But these encrypted pages would of course run the risk of not being archived by the Wayback Machine.
How is the entity or power that has the ability to grant me such knowledge connected to the existence of the universe?
can people not use that to take each other’s shops down?
The whole premise of the OP is that this monitors people, and many organizations use TOTP, which AFAIK one could also use without an internet connection or a phone.
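For what it’s worth, TOTP (RFC 6238) really is just an HMAC over a time-step counter, so it genuinely needs no network or phone; a minimal stdlib sketch (SHA-1, 30-second steps):

```python
# Minimal TOTP (RFC 6238) using only the stdlib, to show it needs no network:
# the code is just HMAC-SHA1 over "number of 30-second steps since the epoch".
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, digits: int = 6, step: int = 30) -> str:
    counter = struct.pack(">Q", timestamp // step)   # HOTP counter (RFC 4226)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII key "12345678901234567890", T = 59 seconds
print(totp(b"12345678901234567890", 59, digits=8))  # -> 94287082
```

Any offline device with a clock and the shared secret can compute the same code, which is why TOTP works without connectivity.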
I’m in academia and I wish this were implemented more. Data breaches are getting quite common, and GitHub is so entwined in software engineering that it’s critical to increase security measures.
Or maybe keep most of them in a folder, plus one file that defines their locations via environment variables?
What other alternatives to env vars are preferred in terms of security?
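As a rough sketch of the folder idea (the names `SECRETS_DIR` and `read_secret` are made up): keep each secret in its own file and use a single environment variable only to point at the folder, so the secret values themselves never sit in the environment or in shell history.

```python
# Hypothetical layout: one file per secret inside a directory, with a single
# env var (SECRETS_DIR, a made-up name) recording where that directory is.
import os
from pathlib import Path

def read_secret(name: str) -> str:
    """Read a secret from its own file instead of from the environment."""
    secrets_dir = Path(os.environ["SECRETS_DIR"])
    return (secrets_dir / name).read_text().strip()
```

File-based secrets can also get tighter filesystem permissions (e.g. `chmod 600`) than anything inherited through the process environment.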
Yeah, I guess the formatting and the verbosity seem a bit annoying? I wonder what alternative solutions could better engage people from Mastodon, which is what this bot is trying to address.
edit: just to be clear, I’m not affiliated with the bot or its creator. This is just my observation from multiple posts I see this bot comments on.
I’m curious, why is this bot currently being downvoted on almost every comment it makes?
Thanks for the suggestions! I’m actually also looking into LlamaIndex for more conceptual comparison, though I haven’t gotten to building an app yet.
Any general suggestions for locally hosted LLMs with LlamaIndex, by the way? I’m also running into some issues with hallucination. I’m using Ollama with llama2-13b and the bge-large-en-v1.5 embedding model.
Anyway, aside from conceptual comparison, I’m also looking for more literal comparison. AFAIK, the choice of embedding model affects how similarity is defined. Most current LLM embedding models are fairly abstract, so similarity is conceptual: “I have 3 large dogs” and “There are three canines that I own” will probably come out very similar. Do you know which embedding model I should choose for a more literal comparison?
That aside, like you indicated, there are some issues. One of them involves length. I hope to find something that iteratively builds up from similar sentences to similar paragraphs. I can take a stab at coding it up, but I was wondering if there are similar frameworks out there already that I can model after.
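To illustrate why the embedding model defines what “similar” means: similarity is usually just cosine between embedding vectors, and one simple (hypothetical) way to lift it from sentences to paragraphs is to average each sentence’s best match. The `embed` callable and the tiny vectors below are placeholders for a real model like bge-large-en-v1.5:

```python
# Toy sketch: cosine similarity over embedding vectors, plus a naive
# sentence->paragraph aggregation (average of each sentence's best match).
# The embedding function is a stand-in; real vectors come from a model.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def paragraph_similarity(sents_a, sents_b, embed):
    """Average each sentence in A's best cosine match among B's sentences."""
    vecs_a = [embed(s) for s in sents_a]
    vecs_b = [embed(s) for s in sents_b]
    best = [max(cosine(va, vb) for vb in vecs_b) for va in vecs_a]
    return sum(best) / len(best)
```

Swapping the `embed` function (conceptual model vs. something more surface-level, like character n-gram vectors) changes the scores without touching the aggregation logic, which is where the choice of embedding model shows up.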
How about baserow.io or NocoDB Cloud? I haven’t used them, but I think they’re open source. They don’t have mobile apps for editing, though, AFAIK.
I wish for a new genie that grants wishes successfully but never tries or succeeds in cursing my wishes.
I think many have also been wondering about version control for legislation/law documents for some time as well. But I’ve never understood why it hasn’t been realized yet.
i’m leaning towards “skull” tho
This is actually an interesting question. The first thing to note is that any estimate is by accounts, not actual people (one person can have multiple alts on both). Honestly, I don’t think a truly meaningful estimate is possible.
That said, I think the first task is to figure out whether we can estimate the number of accounts deleted on Reddit from the controversial period (let’s say April, when the API changes started) until now.
I’m not aware of any public daily data on this from Reddit, but there have been attempts at archiving Reddit during this time, and of course before. So one could theoretically use the archives to find “all” existing users, then check the links now via a browser (or curl) to see if they still exist, treating that as a good-enough proxy for a deleted account.
One may estimate when they were deleted by checking the links in the archives, if possible. If not, the Wayback Machine may give a sense of the timing, though it has its limitations.
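A rough sketch of that link-checking proxy, assuming (which is worth verifying) that a deleted account’s `/user/` page returns HTTP 404 while a live one returns 200; the function names here are made up:

```python
# Sketch of the "does this account page still exist?" proxy check.
# Assumption to verify: deleted account -> 404, live account -> 200;
# anything else (rate limiting, suspension pages) is left as unknown.
import urllib.error
import urllib.request

def classify(status_code: int) -> str:
    if status_code == 404:
        return "deleted"   # page gone -> treat as a deleted account
    if status_code == 200:
        return "exists"
    return "unknown"       # 403/429/5xx etc. need manual inspection

def account_status(url: str) -> str:
    """Fetch a /user/ page and classify the account by HTTP status."""
    req = urllib.request.Request(url, headers={"User-Agent": "archive-check/0.1"})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return classify(resp.status)
    except urllib.error.HTTPError as e:
        return classify(e.code)
```

At archive scale you’d also need rate limiting and retries, and the “unknown” bucket matters: suspended accounts and anti-bot pages would otherwise silently skew the deletion count.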
Lemmy tracks account registrations daily, I believe. I don’t know what stats one would need to run, but if we can line up the time series of account creation on Lemmy with account deletion on Reddit, we might get some sense of a lower bound for those who jumped ship forever.
sounds like this can be a plot of a new Pixar movie