Did you really just try to excuse and downplay a company claiming full ownership and rights over all users’ data?
Your claim was that this is “increased awareness to the average person”. How are you mixing “average person” and “Arch extras repo”?
You’re intentionally conflating two separate points I made.
Point #1: the fact that you went out of your way as a result of the lawsuit news to download and try Yuzu proves my point that more people will try it out.
Point #2: the binaries are still available in some of the usual places. For example, it’s still available in the Arch repos.
Those two concepts aren’t directly linked together. And I decided to check out the Suyu progress and they’re making much more than just README and branding changes. They’ll have binaries available soon also.
And like I keep saying, Yuzu wasn’t the only Switch emulator out there. So even if people can’t find Yuzu, they can find the other one which is very much active and available to use. It’s called Ryujinx, btw. It’s a terrible name, but it works.
Yes the project can continue. The original developers, who were obviously best suited to continue it, are gone. I’m sure suyu can do a good job, but I just don’t see how you can call it a positive.
Well, for one thing, I never said it was a positive. I didn’t use that word, nor did I even imply it.
Look at LibreOffice. It was forked from OpenOffice and it has far outpaced OpenOffice to the point that it’s embarrassing that OpenOffice is still being developed. Just because the original core devs are gone means nothing in the long run. Switch emulation isn’t some black magic secret project that only a handful of people know how to do. The biggest hurdle is always the DRM portion, which has long since been cracked. The rest is basic dev stuff.
And in any case, there is another project that’s been around as long as Yuzu and is equally capable and performant.
I wasn’t interested in Switch emulation before this, but wanted to try out of curiosity when this happened.
This statement literally proves my point.
All the download sites and tutorials are dead, and sketchy alternate downloads cannot be trusted.
The binaries still exist in some repos, like the Arch extras repo.
No it’s not going to have the opposite effect.
It will. The nature of the project will shift from a core team like Yuzu had to a decentralized process. If they avoid the legal pitfalls that killed Yuzu (like donations) then there’s little to nothing that Nintendo can do legally.
There’s already a project forked from Yuzu called Suyu that has a ton of activity on it. To me it looks like all the external contributors have jumped on to that new project and are working on removing all references to Yuzu and they will continue the work.
The dev process has absolutely been temporarily halted and significantly slowed down, but it’s not going to stop.
Best case scenario a different team will take over the project and continue, which is not impossible, but far from a given.
It happened within 24hrs of the news.
More awareness to an abandoned project?
The binaries for Yuzu and all the tutorials still exist. Everything that has worked on Yuzu until now will continue to work forever. The news has simply increased awareness to the average person that you can play Switch games on a computer. People who otherwise would never have known about it.
And all of this completely ignores another still existing Switch emulation project that was just as capable as Yuzu that has existed for just as long.
So yes, it is ABSOLUTELY going to have the opposite effect. At the very “best”, Nintendo won an empty victory.
I really doubt that they are that stupid.
I wasn’t referring to the Yuzu core team.
all the ppl that were directly associated with the group are no longer legally allowed (or at least would risk a lawsuit against them) to contribute. So a lot of expertise got lost.
Sure, the core “Yuzu” team. That doesn’t include any of the external contributors. There’s very often a larger contributor base outside of a core team in FOSS projects.
And yes, there’s expertise that was lost. But that doesn’t mean no one else knows how to do the work. It will march onwards.
The lawsuit against Yuzu is going to have the exact opposite effect they hope.
All it’s doing is increasing public awareness of the project, and because it’s open source it will just sprout more heads like a hydra, and it will live on forever.
OK mman, dont pop a vein over this
That’s incredibly rude. At no point was I angry or enraged. What you’re trying to do is minimize my criticism of your last comment by intentionally making it seem like I was unreasonably angry.
I was going to continue with you in a friendly manner, but screw you. You’re an ass (and also entirely wrong).
A lot of what you said is true.
Since the TPU is a matrix processor instead of a general purpose processor, it removes the memory access problem that slows down GPUs and CPUs and requires them to use more processing power.
Just no. Flat out no. Just so much wrong. How does the TPU process data? How does the data get there? It needs to be shuttled back and forth over the bus. Doing this for a 1080p image worth of data several times a second is fine. An uncompressed 1080p image is about 8MB. Entirely manageable.
Edit: it’s not even 1080p, because the image would get resized to the input size. So again, 300x300x3 for the best model I could find.
/Edit
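To put actual numbers on the frame sizes (this is my own back-of-the-envelope math, assuming 1 byte per channel, and 4 channels (RGBA) for the ~8MB figure):

```python
# Back-of-the-envelope sizes for data shuttled over the bus per frame.
# Assumption: 1 byte per channel; the ~8MB figure treats 1080p as RGBA.

def frame_bytes(width, height, channels):
    """Uncompressed frame size in bytes."""
    return width * height * channels

full_hd = frame_bytes(1920, 1080, 4)   # 1080p RGBA frame
model_in = frame_bytes(300, 300, 3)    # typical detection-model input

print(f"1080p RGBA: {full_hd / 1e6:.1f} MB")        # ~8.3 MB
print(f"300x300x3 input: {model_in / 1e3:.0f} KB")  # 270 KB
```

So the resized model input is a tiny fraction of even a full 1080p frame, which is why a few transfers per second is no problem.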
Look at this repo. You need to convert the models using the TFLite framework (Tensorflow Lite) which is designed for resource constrained edge devices. The max resolution for input size is 224x224x3. I would imagine it can’t handle anything larger.
https://github.com/jveitchmichaelis/edgetpu-yolo/tree/main/data
Now look at the official model zoo on the Google Coral website.
Not a single model is larger than 40MB. Whereas LLMs start at well over a gig for even the smaller (and inaccurate) models. The good ones start at about 4GB and I frequently run models at about 20GB. The size in parameters really makes a huge difference.
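For anyone curious where those sizes come from, it’s basically parameter count times bytes per parameter. A rough sketch (my assumption here is fp16 weights at 2 bytes each; quantized formats shrink this):

```python
# Approximate in-memory model size: parameters x bytes per parameter.
# Assumption: fp16/bf16 weights (2 bytes each).

def model_size_gb(params_billions, bytes_per_param=2):
    return params_billions * 1e9 * bytes_per_param / 1e9

print(model_size_gb(7))     # a 7B LLM at fp16 -> ~14 GB
print(model_size_gb(0.02))  # a 20M-param detection model -> ~0.04 GB (40 MB)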
You likely/technically could run an LLM on a Coral, but you’re going to wait on the order of double-digit minutes for a basic response, if not way longer.
It’s just not going to happen.
when comparing apples to apples.
But this isn’t really easy to do, and impossible in some cases.
Historically, Nvidia has done better than AMD in gaming performance because there are so many game-specific optimizations in the Nvidia drivers, which AMD’s lacked.
On the other hand, AMD historically had better raw performance in scientific calculation tasks (pre-deeplearning trend).
Nvidia has had a stranglehold on the AI market entirely because of their CUDA dominance. But hopefully AMD has finally bucked that trend with their new ROCm release that is a drop-in replacement for CUDA (meaning you can just run CUDA compiled applications on AMD with no changes).
Also, AMD’s new MI300X AI processor is (supposedly) wiping the floor with Nvidia’s H100 cards. I say “supposedly” because I don’t have $50k USD to buy both cards and compare myself.
Ya, that just solidifies that you don’t know how to use the word.
How does using a certain operating system equate to “someone who annoys others by correcting small errors”?
I’m not sure you know how to use that word.
And you can add as many TPUs as you want to push it to whatever level you want
No you can’t. You’re going to be limited by the number of PCIe lanes. But putting that aside, those Coral TPUs don’t have any memory. Which means for each operation you need to shuffle the relevant data over the bus to the device for processing, and then back and forth again. You’re going to be doing this thousands of times per second (likely much more), and I can tell you from personal experience that running AI like that is painfully slow (if you can even get it to work that way in the first place).
You’re talking about the equivalent of buying hundreds of dollars of groceries, and then getting everything home 10km away by walking with whatever you can put in your pockets, and then doing multiple trips.
What you’re suggesting can’t work.
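To ballpark why (these throughput numbers are my rough assumptions, not measurements): if the weights don’t fit on the device, they have to be re-streamed over the bus for every forward pass.

```python
# Rough estimate of time spent purely moving data over the bus.
# Assumptions (mine): ~400 MB/s effective throughput on a USB 3.0 /
# single-lane link, and a model too big to stay resident on the device,
# so the weights get re-streamed every inference pass.

def transfer_seconds(megabytes, mb_per_sec=400):
    return megabytes / mb_per_sec

weights_mb = 4000  # a ~4 GB LLM, per the sizes mentioned above
per_pass = transfer_seconds(weights_mb)
print(f"{per_pass:.0f} s of pure bus time per pass")  # 10 s

# Generating a ~200-token reply needs ~200 forward passes:
print(f"{200 * per_pass / 60:.0f} min minimum")  # ~33 min
```

And that’s ignoring compute entirely; it’s just the shipping time for the groceries.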
ATI cards (while pretty good) are always a step behind Nvidia.
Ok, you mean AMD. They bought ATI like 20 years ago now and that branding is long dead.
And AMD cards are hardly “a step behind” Nvidia. This is only true if you buy the 24GB top card of the series. Otherwise you’ll get comparable performance from AMD at a better value.
Plus, most distros have them working out of the box.
Unless you’re running a kernel older than 6.x, every distro will support AMD cards. And even then, you could always install the proprietary blobs from AMD and get full support on any distro. The kernel version only matters if you want to use the FOSS kernel drivers for the cards.
Two* GPUs? Is that a thing? How does that work on a desktop?
I’ve been using two GPUs in a desktop for about 15 years. One AMD and one Nvidia (although not lately).
It really works just the same as a single GPU. The system doesn’t really care how many you have plugged in.
The only difference you have to care about is specifying which GPU you want a program to use.
For example, if you had multiple Nvidia GPUs you could restrict a program to the first one from the command line with:
CUDA_VISIBLE_DEVICES=0 my_program
or to the first two with:
CUDA_VISIBLE_DEVICES=0,1 my_program
Anyways, you get the idea. It’s a thing that people do and it’s fairly simple.
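One wrinkle worth knowing (this snippet is just my own illustration of the behavior): CUDA renumbers the visible devices in the order you list them, so logical device 0 inside the program maps to whatever physical GPU came first in the list.

```python
import os

# Simulate how CUDA remaps device IDs when CUDA_VISIBLE_DEVICES is set.
# Inside the process, devices are renumbered 0..N-1 in the listed order.

os.environ["CUDA_VISIBLE_DEVICES"] = "1,0"

def visible_devices():
    """Physical GPU indices, in the order a CUDA program will see them."""
    raw = os.environ.get("CUDA_VISIBLE_DEVICES", "")
    return [int(i) for i in raw.split(",") if i.strip()]

print(visible_devices())  # [1, 0]: logical device 0 is physical GPU 1
```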
getting a few CUDA TPUs
Those aren’t “CUDA” anything. CUDA is a parallel processing framework by Nvidia and for Nvidia’s cards.
Also, those devices are only good for inferencing smaller models for things like object detection. They aren’t good for developing AI models (in the sense of training). And they can’t run LLMs. Maybe you can run a smaller model under 4B, but those aren’t exactly great for accuracy.
The best you could hope for is to run a very small instruct model trained on very specific data (like robotic actions) that doesn’t need accuracy in the sense of “knowledge accuracy”.
And completely forget any kind of generative image stuff.
Are CUDAs something that I can select within pcpartpicker?
I’m not sure what they were trying to say, but there’s no such thing as “getting a couple of CUDA’s”.
CUDA is a framework that runs on Nvidia hardware. It’s the hardware that will have “CUDA cores”, which are large numbers of low-power processing units. AMD calls them “stream processors”.
You could also completely forego the GPU and get a couple of CUDAs for a fraction of the cost.
What is this sentence? How do you “get a couple of CUDA’s”?
I may be a linux nerd and pedantic
There’s nothing pedantic about using Arch. There’s a reason it and its derivatives are so popular.
maybe checkout EndeavourOS
After about a decade of being exclusively on Ubuntu I got fed up with it and moved to EndeavourOS and I love it.
Although I am being tempted by the NixOS crowd, right now I’m perfectly happy with EndeavourOS.
If it doesn’t work when your internet is out, then it’s not local.